Speaker
Fabian Ruehle
Description
Recently, neural networks have been used to approximate Ricci-flat metrics of Calabi-Yau and G2 manifolds. In these setups, the neural network is the metric (in some choice of local coordinates). One typically starts from a reference metric that is not Ricci-flat and trains the neural network so that it approximates the Ricci-flat metric at the end of training. In this way, training induces a complicated flow in metric space. We discuss this flow, how it compares to other metric flows such as Ricci flow, and implementations of this flow in the so-called neural tangent kernel (NTK) regime, where the neural network becomes infinitely wide. We also propose ways of dealing with the notorious memory cost of NTK methods.
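To make the setup concrete, here is a minimal sketch (not the speaker's code) of the idea that the network parametrizes the metric and that gradient descent induces a flow in metric space. The reference potential, the network ansatz, and the toy determinant loss below are illustrative assumptions; the actual Calabi-Yau loss would involve the holomorphic volume form and a Monge-Ampere equation.

```python
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Initialize a small MLP; the network plus reference data plays the role of the metric."""
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (din, dout)) / jnp.sqrt(din),
                       jnp.zeros(dout)))
    return params

def mlp(params, x):
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return (x @ w + b).squeeze(-1)

def potential(params, x):
    # Hypothetical ansatz: a stand-in reference potential plus a learned correction.
    k_ref = jnp.sum(x**2, axis=-1)
    return k_ref + mlp(params, x)

def metric(params, x):
    # Metric as the Hessian of the potential in local (real) coordinates.
    return jax.vmap(jax.hessian(lambda p: potential(params, p[None, :])[0]))(x)

def loss(params, x):
    # Toy flatness proxy: drive det(g) toward a constant over sample points.
    g = metric(params, x)
    return jnp.mean((jnp.linalg.det(g) - 1.0) ** 2)

key = jax.random.PRNGKey(0)
params = init_mlp(key, [4, 64, 64, 1])
x = jax.random.normal(key, (256, 4))  # sample points in local coordinates

# One gradient step; iterating this traces out a flow in metric space.
grads = jax.grad(loss)(params, x)
params = jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)
```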
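For the infinite-width regime, one way to compute the NTK in practice is the neural_tangents library; the sketch below is a generic illustration, and batching the kernel computation is one standard way to reduce NTK memory costs (whether this matches the memory-saving proposals of the talk is an assumption).

```python
import jax
import neural_tangents as nt
from neural_tangents import stax

# Infinite-width network; kernel_fn gives the exact NTK in that limit.
init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(512), stax.Relu(),
    stax.Dense(1),
)

x = jax.random.normal(jax.random.PRNGKey(0), (1024, 4))

# Materializing the full 1024 x 1024 kernel at once can exhaust memory;
# nt.batch evaluates it tile by tile instead.
batched_kernel_fn = nt.batch(kernel_fn, batch_size=128)
ntk = batched_kernel_fn(x, None, 'ntk')  # shape (1024, 1024)
```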