2–8 Nov 2025
TIFR Mumbai
Asia/Kolkata timezone

Reinforcement learning (RL) methods for finding fermion propagators

4 Nov 2025, 18:00
1h 30m
HBA Foyer

Speaker

Ananth Ashish Garg Balasubramanian (Indian Institute of Science)

Description

The conjugate gradient method is the standard technique for computing propagators in lattice QCD, and preconditioning the Dirac operator accelerates its convergence. The goal of our project is to develop neural networks that predict preconditioners for the Dirac operator, taking the gauge configuration as input.
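As background, the preconditioned conjugate gradient can be sketched in a few lines of NumPy. The matrix below is a toy symmetric positive-definite system standing in for the (normal-equations) Dirac operator, and the Jacobi preconditioner is an illustrative choice, not the preconditioner targeted by the project:

```python
import numpy as np

def pcg(A, b, precond, tol=1e-8, max_iter=2000):
    """Preconditioned conjugate gradient for A x = b, with A symmetric
    positive-definite. `precond` applies an approximate inverse of A to a
    vector. Returns the solution and the iteration count."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Toy ill-conditioned SPD system standing in for the Dirac normal operator.
rng = np.random.default_rng(0)
n = 200
d = np.linspace(1.0, 1000.0, n)   # widely spread diagonal -> large condition number
R = rng.normal(size=(n, n)) / n
A = np.diag(d) + R @ R.T          # dominant diagonal plus a small PSD perturbation
b = rng.normal(size=n)

x_plain, it_plain = pcg(A, b, precond=lambda r: r)           # no preconditioning
x_jac, it_jac = pcg(A, b, precond=lambda r: r / np.diag(A))  # Jacobi preconditioner
```

Even the simple Jacobi preconditioner sharply reduces the iteration count here; a learned preconditioner aims at the same effect for the far harder Dirac operator.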
Our approach closely follows recent work from MIT [1], where conventional neural networks were trained to produce preconditioners by optimizing a differentiable loss function. We plan to first set up similar neural networks for 4D SU(3) gauge configurations.
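The idea of learning a preconditioner through a differentiable loss can be illustrated with a deliberately minimal stand-in: a free diagonal matrix plays the role of the network's output, the loss ||MAr - r||^2 over random probe vectors serves as a differentiable surrogate for "MA close to the identity", and the gradient is derived by hand. None of this is the architecture or loss of Ref. [1]; it only shows the training principle:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
d = np.linspace(1.0, 4.0, n)      # spectrum of a toy diagonal "Dirac" operator
A = np.diag(d)

m = np.ones(n)                    # diagonal preconditioner, start at the identity
lr, batch = 0.02, 32
for step in range(2000):
    R = rng.normal(size=(batch, n))                  # random probe vectors
    Y = R @ A.T                                      # A applied to each probe
    grad = 2.0 * np.mean((m * Y - R) * Y, axis=0)    # hand-derived dL/dm
    m -= lr * grad                                   # gradient-descent step

# For this diagonal A the optimum is m_i = 1/d_i, i.e. the exact inverse.
```

In the actual proposal, a neural network maps the gauge configuration to the preconditioner and automatic differentiation supplies the gradient; the free diagonal above only keeps the sketch self-contained.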
The primary objective is then to develop a reinforcement learning-based network that achieves similar outcomes. Reinforcement learning can train such networks even when the loss function is not differentiable, making the approach viable for a broader range of applications.
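The reinforcement-learning angle can be sketched as follows: the natural figure of merit, the CG iteration count, is an integer and hence not differentiable, but a policy-gradient method such as REINFORCE can still optimize over preconditioner choices using it as a reward. The two-action policy and toy diagonal system below are illustrative assumptions, not the project's design:

```python
import numpy as np

def cg_iters(A, b, precond, tol=1e-8, max_iter=2000):
    """Iteration count of preconditioned CG on A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r); p = z.copy(); rz = r @ z
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            return k
        z = precond(r); rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return max_iter

rng = np.random.default_rng(2)
n = 100
d = np.linspace(1.0, 100.0, n)
A = np.diag(d)
b = rng.normal(size=n)

# Two candidate preconditioners; the reward is minus the iteration count,
# which cannot be differentiated, so we use a policy gradient instead.
actions = [lambda r: r,         # identity (no preconditioning)
           lambda r: r / d]     # Jacobi
z = np.zeros(2)                 # policy logits
baseline, lr = 0.0, 0.5
for episode in range(300):
    p = np.exp(z - z.max()); p /= p.sum()         # softmax policy
    a = rng.choice(2, p=p)                        # sample an action
    reward = -cg_iters(A, b, actions[a]) / 50.0   # non-differentiable reward
    baseline = 0.9 * baseline + 0.1 * reward      # running-mean baseline
    z += lr * (reward - baseline) * (np.eye(2)[a] - p)  # REINFORCE update

p_final = np.exp(z - z.max()); p_final /= p_final.sum()
```

The policy learns to prefer the action that needs fewer CG iterations, even though no gradient of the iteration count exists; the project aims to apply the same principle with a neural network producing the preconditioner itself.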

[1] Y. Sun, S. Eswar, Y. Lin, W. Detmold, P. Shanahan, X. Li, Y. Liu and P. Balaprakash, "Matrix-free Neural Preconditioner for the Dirac Operator in Lattice Gauge Theory," arXiv:2509.10378 [hep-lat].

Parallel Session: Algorithms and artificial intelligence

Authors

Ananth Ashish Garg Balasubramanian (Indian Institute of Science), Sudipta Mondal (Indian Institute of Science), Prasad Hegde (Indian Institute of Science)

Presentation materials

There are no materials yet.