Description
Unnormalized probability distributions are fundamental to modelling complex physical systems, especially in lattice field theory. The Markov chain Monte Carlo (MCMC) methods traditionally used to study these systems often suffer from slow convergence, critical slowing down, and poor mixing, resulting in highly correlated samples. Machine learning based sampling approaches, such as normalising flows (NFs), are increasingly used in conjunction with the Metropolis-Hastings algorithm to produce unbiased samples. This talk discusses several shortcomings of NFs and some of our solutions to them. Often, especially on high-dimensional lattices, the target distributions are multi-modal, i.e., they have multiple high-probability regions (modes) separated by low-probability barriers. For NFs, reverse KL training leads to mode collapse (learning only a few modes), whereas forward KL training induces mode-covering behaviour (producing samples from low-probability regions). Moreover, forward KL training relies on MCMC-generated samples, which increases the computational cost. We propose two approaches, an adversarial learning approach (AdvNF) and a score-based learning approach (ScoreNF), and demonstrate their effectiveness and efficiency on multiple benchmark datasets, including mixtures of Gaussians, the XY model, and scalar φ⁴ theory.
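
To make the two training objectives concrete, the sketch below contrasts them on a toy two-mode Gaussian mixture, together with the independence Metropolis-Hastings correction that makes flow samples unbiased. This is a minimal illustration only, assuming PyTorch: `AffineFlow`, `target_log_prob`, and all hyperparameters are hypothetical stand-ins for exposition, not the AdvNF/ScoreNF implementations discussed in the talk.

```python
# Minimal sketch (PyTorch assumed) of the two KL objectives and the
# Metropolis-Hastings correction mentioned in the abstract. AffineFlow is
# a deliberately tiny stand-in for a real normalising flow (e.g. RealNVP).
import math
import torch


class AffineFlow(torch.nn.Module):
    """q(x): x = mu + exp(log_sigma) * z with z ~ N(0, I)."""

    def __init__(self, dim: int):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.zeros(dim))
        self.log_sigma = torch.nn.Parameter(torch.zeros(dim))

    def sample_and_log_prob(self, n: int):
        z = torch.randn(n, self.mu.numel())
        x = self.mu + self.log_sigma.exp() * z
        # Change of variables: log q(x) = log N(z; 0, I) - sum(log_sigma)
        log_q = (-0.5 * (z**2 + math.log(2 * math.pi))).sum(-1) - self.log_sigma.sum()
        return x, log_q

    def log_prob(self, x: torch.Tensor) -> torch.Tensor:
        z = (x - self.mu) / self.log_sigma.exp()
        return (-0.5 * (z**2 + math.log(2 * math.pi))).sum(-1) - self.log_sigma.sum()


def target_log_prob(x: torch.Tensor) -> torch.Tensor:
    """Toy multi-modal target: equal-weight unit Gaussians at -4 and +4."""
    centers = torch.tensor([[-4.0], [4.0]])
    log_comp = (-0.5 * ((x.unsqueeze(1) - centers) ** 2).sum(-1)
                - 0.5 * math.log(2 * math.pi))
    return torch.logsumexp(log_comp, dim=1) - math.log(2.0)


def reverse_kl_loss(flow: AffineFlow, n: int = 1024) -> torch.Tensor:
    """E_q[log q - log p]: self-sampled, mode-seeking (risks mode collapse)."""
    x, log_q = flow.sample_and_log_prob(n)
    return (log_q - target_log_prob(x)).mean()


def forward_kl_loss(flow: AffineFlow, mcmc_samples: torch.Tensor) -> torch.Tensor:
    """E_p[-log q]: mode-covering, but needs costly MCMC samples of p."""
    return -flow.log_prob(mcmc_samples).mean()


def imh_accept(flow: AffineFlow, x_cur: torch.Tensor, x_prop: torch.Tensor) -> bool:
    """Independence Metropolis-Hastings step with the flow as proposal:
    accept with probability min(1, p(x') q(x) / (p(x) q(x')))."""
    log_alpha = (target_log_prob(x_prop) - target_log_prob(x_cur)
                 + flow.log_prob(x_cur) - flow.log_prob(x_prop))
    return bool((torch.rand(()).log() < log_alpha).item())


# Reverse-KL training loop: the fitted Gaussian typically settles on one
# of the two modes rather than covering both, illustrating mode collapse.
flow = AffineFlow(dim=1)
opt = torch.optim.Adam(flow.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = reverse_kl_loss(flow)
    loss.backward()
    opt.step()
```

Swapping `reverse_kl_loss` for `forward_kl_loss`, fed with MCMC draws from the target, reproduces the opposite, mode-covering behaviour described above, at the cost of generating those samples in the first place.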
| Parallel Session (for talks only) | Algorithms and artificial intelligence |
|---|---|
