Description
This study explores the application of Local Interpretable Model-Agnostic Explanations (LIME) to improve the explainability of a Convolutional Neural Network (CNN) used to classify the condition of drilled holes in melamine-faced chipboard. A pretrained VGG16 network serves as the foundation of the CNN model, which classifies the holes according to their wear state. The dataset comprises images captured during drilling experiments on a CNC vertical machining center. The CNN achieves an overall accuracy of 66.60%, evaluated through cross-validation. By integrating LIME, we generate visual explanations that elucidate the model's decision-making process and identify the most influential regions of each image. These insights are crucial for validating model performance, debugging, and building trust in the automated tool condition monitoring system. The results demonstrate the potential of combining deep learning with explainable AI techniques to improve transparency and reliability in critical industrial applications.
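The core LIME procedure the abstract refers to can be sketched in a few lines: segment the image into superpixels, randomly mask subsets of segments, query the classifier on each perturbed image, and fit a weighted linear surrogate whose coefficients rank the segments by influence. The sketch below is a minimal, self-contained illustration of that idea in plain NumPy; it uses a simple grid segmentation and a caller-supplied `predict_fn` as stand-ins (a real pipeline would use the `lime` package's `LimeImageExplainer` with a superpixel algorithm such as quickshift and the VGG16-based model's predict function, none of which are shown in the source).

```python
import numpy as np

def lime_explain(image, predict_fn, n_segments_side=2, n_samples=200, seed=0):
    """Minimal LIME-style explanation for a single 2-D image.

    Splits the image into a grid of "superpixels" (a stand-in for
    quickshift/SLIC), randomly masks subsets of them, queries the
    model on each perturbed image, and fits a weighted linear
    surrogate whose coefficients rank the segments by influence
    on the predicted class score.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    # Grid segmentation: one segment id per pixel.
    rows = np.minimum(np.arange(h) * n_segments_side // h, n_segments_side - 1)
    cols = np.minimum(np.arange(w) * n_segments_side // w, n_segments_side - 1)
    segments = rows[:, None] * n_segments_side + cols[None, :]
    n_seg = n_segments_side ** 2

    # Binary interpretable samples: which segments are kept (1) or masked (0).
    z = rng.integers(0, 2, size=(n_samples, n_seg))
    z[0] = 1  # include the unperturbed image

    preds = np.empty(n_samples)
    for i in range(n_samples):
        mask = z[i][segments]          # per-pixel keep/drop mask
        preds[i] = predict_fn(image * mask)

    # Exponential kernel: samples closer to the original image weigh more.
    dist = 1.0 - z.mean(axis=1)
    weights = np.exp(-(dist ** 2) / 0.25)

    # Weighted least squares surrogate; coefficients = segment importances.
    X = np.hstack([z, np.ones((n_samples, 1))])   # intercept column
    sw = np.sqrt(weights)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], preds * sw, rcond=None)
    return coef[:n_seg], segments
```

For visual explanations like those in the study, the top-ranked segments are then highlighted on the original hole image, showing which regions drove the wear-state prediction.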