Speaker
Description
Currently, parameter estimation schemes in gravitational wave astronomy fall into one of two sharply divergent categories: matched filtering schemes, which rely on resource-intensive numerical modeling of full (strong-curvature) general relativity, or deep learning neural networks, which rely on black-box model training. The situation is only slightly better for source localization algorithms, in which the parameters of interest are only the source and polarization angles of the gravitational wave, but which must run extremely quickly to allow electromagnetic follow-up as close as possible to the time of merger. Here, innovative matched filtering algorithms like Bayestar (Singer and Price, 2016) offer huge performance improvements over classic matched filtering schemes like LALInference (Veitch et al., 2015), while recent deep learning networks promise to soon allow source localization of binary neutron star mergers before the time of merger (Baltus et al., 2021; VanStraalen et al., 2024). Even in this reduced parameter space, however, Bayestar remains slower than the neural networks, while the neural networks operate as black boxes, giving no information about how their parameter estimates are made, and therefore about when and where they might fail. In this talk, I will give a brief overview of the current state of the art in source localization algorithms -- both matched filter and neural network -- then outline a strategy that seeks a middle road between the two. Building on my previous source localization work (McClain, 2018; McClain, 2019), which combines empirical signal modeling, physical intuition, and powerful (but fast) numerical methods, I will present a novel approach to source localization that offers the potential to run as quickly as neural networks while maintaining full control over built-in assumptions (and therefore over likely failure modes).