Explainable AI Models for Medical Signal and Image Interpretation in Healthcare Monitoring Systems

Authors

  • Len Gelman, School of Computing and Engineering, The University of Huddersfield, Queensgate, Huddersfield, HD1 3DH, UK
  • Ricardo Alvarez, Professor, University of Zagreb, Croatia

DOI:

https://doi.org/10.17051/NJSIP/01.02.05

Keywords:

Explainable Artificial Intelligence (XAI); Medical Imaging; Biomedical Signal Processing; Healthcare Monitoring Systems; Deep Learning Interpretability; Saliency Maps; Grad-CAM; Clinical Decision Support

Abstract

Artificial Intelligence (AI) has transformed the healthcare monitoring landscape by enabling real-time, automated interpretation of physiological signals and imagery. Nonetheless, AI has not yet been widely adopted in clinical practice because deep learning models are often opaque to the point of being described as black boxes; this lack of transparency undermines the trust, accountability, and explainability that are crucial in clinical decision support. This paper describes a framework that integrates Explainable Artificial Intelligence (XAI) techniques into the processing of electrocardiogram (ECG), electroencephalogram (EEG), and magnetic resonance imaging (MRI) data. Our approach combines SHAP, LIME, attention maps, Grad-CAM, and TCAV to deliver interpretable information about model predictions, allowing clinicians to visualize and understand how particular features or signal segments influence a diagnostic decision. We evaluate the framework on three publicly available benchmark datasets, MIT-BIH Arrhythmia (ECG), PhysioNet EEG (neural activity), and BraTS 2021 (brain tumor segmentation), and find that the proposed models achieve high diagnostic performance while providing explanations that are readily understood and clinically relevant. The results demonstrate that our explainable models match or outperform baseline classification and segmentation performance and highlight the key diagnostic features, fostering trust in the diagnosis among healthcare professionals. This helps to bridge the trade-off between model accuracy and explainability and supports the development of AI-assisted medical systems that are both effective and accountable. The proposed XAI-powered system improves the interpretability of automated healthcare analytics, making it applicable to early disease detection, continuous patient monitoring, and risk stratification. In the long term, the study contributes to the deployment of reliable AI-based technologies in practical clinical settings, aligning technological processes with ethical and regulatory principles.
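
As an illustration of the Grad-CAM component mentioned in the abstract, the sketch below shows how a time-domain saliency curve might be obtained for a 1D CNN ECG beat classifier in PyTorch. The network architecture, beat length, and class count are hypothetical placeholders for illustration only, not the models evaluated in the paper.

```python
# Minimal Grad-CAM sketch for a 1D CNN ECG beat classifier (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ECGNet(nn.Module):
    """Toy 1D CNN standing in for an MIT-BIH-style beat classifier (hypothetical)."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):
        fmap = self.features(x)              # (B, 32, T) feature maps
        logits = self.fc(fmap.mean(dim=-1))  # global average pooling + classifier
        return logits, fmap

def grad_cam_1d(model: nn.Module, signal: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return a per-sample importance curve over the input for `target_class`."""
    model.eval()
    logits, fmap = model(signal)
    fmap.retain_grad()                               # keep gradients of the feature maps
    logits[0, target_class].backward()
    weights = fmap.grad.mean(dim=-1, keepdim=True)   # channel weights = averaged gradients
    cam = F.relu((weights * fmap).sum(dim=1)).detach()       # weighted combination, (B, T)
    cam = F.interpolate(cam.unsqueeze(1), size=signal.shape[-1],
                        mode="linear", align_corners=False).squeeze(1)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]

if __name__ == "__main__":
    beat = torch.randn(1, 1, 360)                    # one ECG beat, e.g. 360 samples
    cam = grad_cam_1d(ECGNet(), beat, target_class=0)
    print(cam.shape)                                 # torch.Size([1, 360]): saliency over the beat
```

The same gradient-weighted averaging of feature maps extends to 2D backbones, which is how Grad-CAM is commonly applied to MRI classification and segmentation models.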

Additional Files

Published

2025-03-18

Issue

Section

Articles