Description
With the increasing volume and complexity of data, machine learning (ML) models have become indispensable tools for identifying patterns in structured datasets. However, as these models grow in complexity, it becomes difficult to determine whether they are learning meaningful relationships or capturing unintended artifacts. This lack of interpretability breeds mistrust, particularly in high-stakes domains.
In clinical settings, prognostic models must be fully explainable to clinicians and patients to ensure confidence in medical decisions. Similarly, in particle physics, ML-based classification models must be demonstrably consistent with known physical principles, such as those described by the Standard Model. This work focuses on the validation and further development of a prognostic ML model for predicting eye disease progression. Additionally, we explore how these validation methodologies can be adapted to the identification of hadronic tau decays in the ATLAS experiment, ensuring robust and interpretable ML applications across disciplines.