Speaker
Jaeok Yi
(KAIST)
Description
A theoretical understanding of deep learning remains elusive despite its empirical success. In this study, we propose a novel "synaptic field theory" that describes the training dynamics of synaptic weights and biases in the continuum limit. Unlike previous approaches, our framework treats the synaptic weights and biases as fields, interpreting their indices as spatial coordinates, with the training data acting as external sources. This perspective offers new insights into the fundamental mechanisms of deep learning and suggests a pathway for leveraging well-established field-theoretic techniques to study neural network training.
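As a rough illustration of the continuum-limit idea (our sketch, not the speaker's equations; the notation below is assumed): in discrete form, gradient-flow training of a weight matrix reads

\[
\frac{dw_{ij}}{dt} = -\frac{\partial L}{\partial w_{ij}},
\]

and promoting the indices \((i, j)\) to continuous spatial coordinates \((x, y)\) turns the weights into a field \(w(x, y, t)\) obeying a functional gradient flow,

\[
\partial_t w(x, y, t) = -\frac{\delta L[w, b]}{\delta w(x, y, t)},
\]

where the loss functional \(L[w, b]\) is evaluated on the training data, which would then enter the field equations as an external source coupled to the weight and bias fields.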