Speaker
Ms Magdalena Markowicz (PW)
Description
Understanding and interpreting the decisions made by deep learning models has become an essential area of research in artificial intelligence. Convolutional neural networks (CNNs), despite their high performance on a variety of tasks, often function as "black boxes," making their predictions difficult to explain. This study focuses on applying and evaluating different explainability techniques on CNN models to gain more insight into their decision-making processes. By comparing several such methods, we aim to assess their effectiveness and reliability in improving the transparency and interpretability of neural networks.
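The abstract does not name the specific techniques evaluated; as one representative example of this family of methods, the sketch below computes a Grad-CAM heatmap for a CNN in PyTorch. The pretrained ResNet-18, the choice of layer4 as the target layer, and the random input tensor are illustrative assumptions, not details from the talk.

```python
# A minimal Grad-CAM sketch (Selvaraju et al., 2017), one widely used CNN
# explainability technique; the talk's actual method list is not specified.
# Assumptions: torchvision's pretrained ResNet-18, its last conv block as
# the target layer, and a random tensor standing in for a real image.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
activations, gradients = {}, {}

def hook(module, inputs, output):
    activations["feat"] = output
    # Capture the gradient flowing back into these feature maps.
    output.register_hook(lambda g: gradients.update(feat=g))

model.layer4.register_forward_hook(hook)

x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed image
logits = model(x)
logits[0, logits.argmax()].backward()  # d(top-class score) / d(feature maps)

# Grad-CAM: weight each feature map by its spatially averaged gradient,
# combine, and keep only positive evidence for the predicted class.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                    align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # scale to [0, 1]
```

The resulting cam tensor can be overlaid on the input image to highlight the regions that most influenced the predicted class, which is the kind of insight into a CNN's decision-making that such techniques provide.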