Developing and testing methodologies that allow the predictions of AI algorithms to be interpreted in terms of transparency, interpretability, and explainability is today one of the most important open questions in AI. Solving AI explainability requires approaches, know-how, and complementary skills from different fields, all essential to understanding the behaviour of AI algorithms. In this seminar, after a general introduction to the concepts and methods of AI explainability, I will illustrate the scope and some of the preliminary results of the MUCCA project (Multi-disciplinary Use Cases for Convergent new Approaches to AI explainability). In MUCCA, an interesting set of use cases in which explainable AI can play a crucial role is used to quantify the strengths and to highlight, and possibly address, the weaknesses of the available explainable AI methods in different application contexts. The use cases are intentionally chosen to be heterogeneous with respect to data types, learning tasks, and scientific questions, ranging from AI applications in high-energy physics, to applied AI in medical imaging, to the design of digital twins for respiratory analysis in functional medicine, to analysis and modelling tasks in neuroscience.