Description
Quantifying tension between different experiments aiming to constrain the same physics is essential for validating our understanding of the world around us. A commonly used tension metric is the evidence ratio statistic, R, the ratio of the joint evidence to the product of the individual evidences under a common model. R can be interpreted as the fractional increase in our confidence in one dataset given knowledge of another. While R has been widely adopted as an appropriately Bayesian way of quantifying tension, it has a non-trivial dependence on the prior that is not always accounted for properly. In this work, we propose using Neural Ratio Estimators (NREs) to calibrate the prior dependence of the R statistic. We show that the output of an NRE corresponds to R when its inputs are datasets from two different experiments. We then show that, with an appropriately trained NRE, one can derive the distribution of all possible values of R for two experiments in concordance, given a model and prior choice. One can then calibrate one's observed R, derived via an independent method such as Nested Sampling, against this distribution to obtain a prior-independent estimate of tension.
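For reference, the R statistic described above can be written in its standard form (notation assumed here, not quoted from the talk): for datasets $D_A$ and $D_B$ with evidences $\mathcal{Z}$,

```latex
R = \frac{\mathcal{Z}(D_A, D_B)}{\mathcal{Z}(D_A)\,\mathcal{Z}(D_B)}
  = \frac{P(D_A \mid D_B)}{P(D_A)},
```

which makes the interpretation explicit: R is the factor by which knowledge of $D_B$ changes our confidence in $D_A$.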
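The calibration idea can be sketched with a toy Gaussian model in which log R is analytic and stands in for a trained NRE. Everything here (the model, variances, and the observed value) is an illustrative assumption, not the talk's implementation: one simulates concordant dataset pairs under the prior, builds the distribution of R expected in concordance, and reads off the quantile of the observed R.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (illustrative): theta ~ N(0, PRIOR_VAR); experiment A observes
# theta + N(0, VAR_A), experiment B observes theta + N(0, VAR_B).
PRIOR_VAR = 10.0
VAR_A, VAR_B = 1.0, 4.0

def simulate_pair(theta):
    """Concordant datasets: both experiments respond to the same theta."""
    return (theta + rng.normal(0, np.sqrt(VAR_A)),
            theta + rng.normal(0, np.sqrt(VAR_B)))

def _gauss_logpdf(x, var):
    return -0.5 * (np.log(2 * np.pi * var) + x * x / var)

def log_r(d_a, d_b):
    """log R = log Z(d_a, d_b) - log Z(d_a) - log Z(d_b).

    Analytic for this toy model; a trained NRE would return this value
    directly for real data."""
    # Marginalising over theta gives Gaussian evidences in the data.
    log_za = _gauss_logpdf(d_a, PRIOR_VAR + VAR_A)
    log_zb = _gauss_logpdf(d_b, PRIOR_VAR + VAR_B)
    # Joint evidence: 2D Gaussian with cov [[P+VA, P], [P, P+VB]].
    cov = np.array([[PRIOR_VAR + VAR_A, PRIOR_VAR],
                    [PRIOR_VAR, PRIOR_VAR + VAR_B]])
    d = np.array([d_a, d_b])
    log_zab = -0.5 * (2 * np.log(2 * np.pi)
                      + np.log(np.linalg.det(cov))
                      + d @ np.linalg.inv(cov) @ d)
    return log_zab - log_za - log_zb

# Distribution of log R under concordance, marginalised over the prior.
thetas = rng.normal(0, np.sqrt(PRIOR_VAR), size=5000)
log_r_dist = np.array([log_r(*simulate_pair(t)) for t in thetas])

# Calibrate an observed R (e.g. from Nested Sampling) against this
# distribution; the quantile gives a prior-independent tension measure.
observed_log_r = 0.5  # hypothetical observed value
p = np.mean(log_r_dist <= observed_log_r)
print(f"fraction of concordant simulations with log R <= observed: {p:.3f}")
```

The key design point is that the concordance distribution of R is generated under the same model and prior used for inference, so the prior dependence of R cancels in the comparison.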