Explainable Artificial Intelligence (XAI) in Healthcare: A Systematic Review of Algorithms, Interpretability Techniques, and Clinical Integration Strategies
DOI: https://doi.org/10.31838/INES/03.02.16

Keywords:
Explainable Artificial Intelligence (XAI); Clinical Decision Support Systems (CDSS); Interpretability Techniques; Healthcare AI; SHAP; LIME; Attention Mechanisms; Deep Learning; Medical Imaging; Human-in-the-Loop AI

Abstract
The increasing adoption of Artificial Intelligence (AI) in healthcare has contributed substantially to diagnosis, prognosis, prediction, and clinical decision-making. Nonetheless, the black-box nature of deep learning models poses significant challenges to clinical acceptance, regulatory approval, and patient trust. Explainable Artificial Intelligence (XAI) addresses these shortcomings by making model decisions interpretable and comprehensible to humans. This systematic review comprehensively examines XAI in healthcare along three dimensions: algorithm types, interpretability techniques, and clinical integration strategies. A total of 112 peer-reviewed articles published between 2018 and 2025 were reviewed, drawn from PubMed, Scopus, IEEE Xplore, and Web of Science. Papers were categorized by application domain (radiology, pathology, genomics, etc.), AI model type (decision trees, deep neural networks, etc.), and explanation technique (SHAP, LIME, attention mechanisms, etc.). The results indicate that SHAP and attention-based methods are the most widely adopted, owing to their balance between fidelity and usability. Key challenges include the accuracy-interpretability trade-off, data bias, the absence of standardized evaluation metrics, and insufficient integration into clinical workflows. The review concludes with a proposed maturity model for human-in-the-loop XAI and recommendations for future research, including domain-specific interpretability benchmarks and regulatory-compliant XAI systems. This work aims to serve as a practical guide for the development of trustworthy and transparent AI in healthcare.
