Description
Machine-learning-based approaches have become increasingly important in computational physics, particularly for simulations of complex many-body systems. In this context, equivariance provides a natural way to incorporate physical symmetries into models, acting as an inductive bias on the learned probability distributions. While such symmetry constraints are desirable, their direct incorporation into effective models for self-learning Monte Carlo (SLMC) can lead to reduced acceptance rates.
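For context, SLMC restores exactness by correcting proposals drawn from an effective model with a Metropolis step, and the acceptance rate mentioned above is the rate of that step. The sketch below is a minimal illustration of the standard correction, not code from the talk; `log_w_exact` and `log_w_eff` are hypothetical callables returning the log-weights of the target and effective models.

```python
# Minimal sketch of the SLMC accept/reject step (assumed standard form):
# proposals x_new are generated by simulating the effective model, and the
# exact weight W enters only through the correction ratio
#   A(x -> x_new) = min(1, [W(x_new)/W(x)] * [W_eff(x)/W_eff(x_new)]),
# so a poor effective model shows up directly as a low acceptance rate.
import numpy as np

def slmc_accept(x, x_new, log_w_exact, log_w_eff, rng):
    """Accept/reject an effective-model proposal (hypothetical API)."""
    log_ratio = (log_w_exact(x_new) - log_w_exact(x)
                 - log_w_eff(x_new) + log_w_eff(x))
    return np.log(rng.random()) < min(0.0, log_ratio)
```

Working in log-weights keeps the test numerically stable; the better the effective model tracks the exact weight, the closer `log_ratio` stays to zero and the higher the acceptance rate.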
Here, we propose a symmetry-equivariant attention mechanism for SLMC that allows for systematic improvement through increased model depth. We test the method on the two-dimensional spin–fermion (double-exchange) model and find that the proposed architecture substantially improves acceptance rates compared with conventional linear models [1]. Furthermore, the performance of the effective model follows a clear scaling behavior, with accuracy improving monotonically as the number of layers increases, reminiscent of trends observed in large-scale language models. These results indicate that deep equivariant architectures provide a promising route toward more efficient and reliable SLMC simulations of complex physical systems.
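To make the symmetry constraint concrete: an equivariant layer f satisfies f(g·x) = g·f(x) for every symmetry operation g. Below is a hedged sketch of one way attention can be made equivariant under lattice translations, using site-shared Q/K/V maps and a relative-position bias on a periodic 1-D chain; all names are illustrative, and this is not the architecture presented in the talk.

```python
# Hypothetical sketch: self-attention on a periodic 1-D lattice that is
# equivariant under cyclic translations. Site-shared Q/K/V matrices plus a
# bias depending only on the relative displacement (i - j) mod L guarantee
# that shifting the input shifts the output identically.
import numpy as np

rng = np.random.default_rng(0)
L, d = 8, 4                          # lattice sites, feature dimension
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
bias = rng.normal(size=L)            # bias[(i - j) % L]

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def equivariant_attention(x):        # x: (L, d) features, one row per site
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    rel = (np.arange(L)[:, None] - np.arange(L)[None, :]) % L
    logits = q @ k.T / np.sqrt(d) + bias[rel]
    return softmax(logits) @ v

x = rng.normal(size=(L, d))
shifted = lambda a, s: np.roll(a, s, axis=0)
# Equivariance check: translating the input by 3 sites translates the output.
assert np.allclose(equivariant_attention(shifted(x, 3)),
                   shifted(equivariant_attention(x), 3))
```

Stacking such layers preserves the symmetry at every depth, which is presumably the knob behind the layer-count scaling reported in the abstract.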
[1] Y. Nagai and A. Tomiya, J. Phys. Soc. Jpn. 93, 114007 (2024).
| Keyword-1 | Self-learning Monte Carlo |
|---|---|
| Keyword-2 | Equivariant neural networks |
| Keyword-3 | Machine learning in physics |