Visualizing Deep Learning Decisions: Grad-CAM-Based Explainable AI for Medical Image Analysis


Isyaku Uba Haruna
Idyawati Hussein Taraba

Abstract

Convolutional neural networks (CNNs) have achieved strong performance in medical image classification and can largely automate disease identification. However, these models operate as black boxes, which limits their adoption in healthcare, where transparency and interpretability are essential. This study addresses that limitation by applying an explainable AI method, Gradient-weighted Class Activation Mapping (Grad-CAM), to visualize and interpret the decisions of a deep CNN trained to detect pneumonia on chest X-rays. A ResNet-50 architecture was fine-tuned on the ChestX-ray14 dataset, one of the most widely used repositories for this task, comprising approximately 102,000 labeled images. The model achieved 93.2% accuracy, 91.8% precision, 94.5% recall, and an area under the curve (AUC) of 0.96, indicating strong diagnostic performance. After training, Grad-CAM was applied to highlight the regions of each X-ray that contributed most to the model's predictions. The resulting visualizations showed that the highlighted regions generally correspond to clinically established findings, such as the pulmonary opacities, infiltrates, and consolidations characteristic of pneumonia. Grad-CAM thereby allowed clinicians to inspect and verify the model's predictions, and the heatmaps helped localize misclassifications, so that errors could be diagnosed and the model improved. Grad-CAM thus supports more reliable diagnosis and eases the translation of sophisticated AI methods into clinical practice: by rendering AI decisions as visual explanations, it gives clinicians grounds to trust a diagnosis and encourages the wider adoption of deep learning models in hospitals. This case demonstrates the value of explainable AI in promoting transparency, accountability, and better-informed clinical decision making.
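For concreteness, the sketch below illustrates how such Grad-CAM heatmaps are typically computed for a fine-tuned ResNet-50 in PyTorch. It is a minimal illustration under stated assumptions, not the authors' implementation: the checkpoint path, the two-class head, and the 224x224 input size are hypothetical, while the layer names (`layer4`, `fc`) follow torchvision's ResNet-50.

```python
# Minimal Grad-CAM sketch for a fine-tuned ResNet-50 (PyTorch).
# Layer names follow torchvision's ResNet; all other names are illustrative.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # e.g. {normal, pneumonia}
# model.load_state_dict(torch.load("pneumonia_resnet50.pt"))  # hypothetical checkpoint
model.eval()

activations, gradients = {}, {}

def forward_hook(module, inputs, output):
    activations["value"] = output.detach()

def backward_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

# Hook the last convolutional block, where Grad-CAM is usually computed.
model.layer4.register_forward_hook(forward_hook)
model.layer4.register_full_backward_hook(backward_hook)

def grad_cam(x, class_idx=None):
    """Return a heatmap (H, W) in [0, 1] for one preprocessed image tensor x."""
    logits = model(x)                                  # x: (1, 3, 224, 224)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    # Weight each feature map by the spatial mean of its gradients,
    # sum the weighted maps, and keep only positive evidence (ReLU).
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)  # (1, C, 1, 1)
    cam = F.relu((weights * activations["value"]).sum(dim=1))    # (1, h, w)
    # Upsample to input resolution and normalize to [0, 1].
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                        mode="bilinear", align_corners=False)[0, 0]
    cam -= cam.min()
    cam /= cam.max().clamp(min=1e-8)
    return cam  # overlay on the X-ray to inspect the highlighted regions
```

In a workflow like the one described above, the returned heatmap would be overlaid on the original chest X-ray so that a clinician can check whether the highlighted regions coincide with opacities, infiltrates, or consolidations.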


How to Cite
Isyaku Uba Haruna, & Idyawati Hussein Taraba. (2025). Visualizing Deep Learning Decisions: Grad-CAM-Based Explainable AI for Medical Image Analysis. IIRJET, 11(2). https://doi.org/10.32595/iirjet.org/v11i2.2025.237