Explainable Artificial Intelligence (XAI): State-of-the-Art, Challenges, and Research Trends
DOI: https://doi.org/10.31838/INES/03.02.17

Keywords: Explainable AI, Interpretable Models, Black-box Models, Post-hoc Explanation, Trustworthy AI, Causal Inference, Human-Centric AI, Model Transparency

Abstract
As artificial intelligence (AI) becomes more common across high-stakes areas, including healthcare diagnostics, financial decision-making, autonomous vehicles, and legal analytics, the need for transparency, interpretability, and accountability in AI decision-making has become critical. The opacity of many effective machine learning algorithms, commonly termed black boxes, has intensified concerns about fairness, trust, bias, and regulatory compliance. Explainable Artificial Intelligence (XAI) has emerged as a major research area that aims to interpret the predictions and decision processes of AI models without degrading predictive performance. This paper first provides a systematic and end-to-end review of the state of the art in XAI, subdividing existing methods into post-hoc explanation techniques (e.g., LIME and SHAP), intrinsically interpretable models (e.g., decision trees and rule-based systems), and hybrid approaches that build explainability into the architecture of deep networks. The essential open issues in XAI are discussed in depth, including the trade-off between model fidelity and interpretability, the subjectivity of explanations grounded in human interaction, the absence of standardized evaluation metrics, and the computational cost of generating explanations. The paper also examines emerging directions, such as causal explanations, counterfactual reasoning, integration with federated learning, and the alignment of XAI techniques with ethical AI principles and governance frameworks such as GDPR and HIPAA. Following a systematic review methodology, it surveys pertinent literature indexed in major databases between 2017 and 2025, noting the comparative strengths, application areas, and usability issues of XAI techniques. The study concludes by identifying the main research gaps and future directions, including the creation of benchmark datasets, explainability for reinforcement learning, and domain-specific evaluation frameworks. This paper can serve as a starting point for researchers, developers, and policymakers seeking to build AI systems that are not only accurate but also interpretable, fair, and aligned with human values.
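To make the post-hoc category named above concrete, the sketch below shows how a model-agnostic attribution method such as SHAP can explain the predictions of an opaque model after training. This is a minimal illustrative sketch, not an implementation from the surveyed literature; the random-forest model, the diabetes dataset, and the choice of shap.TreeExplainer are assumptions made here for brevity.

```python
# Minimal post-hoc explanation sketch using SHAP (illustrative only:
# the model, dataset, and explainer choice are assumptions, not drawn
# from the surveyed papers).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train an opaque ("black-box") ensemble model.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles;
# shap.KernelExplainer is the slower, fully model-agnostic alternative.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])  # one attribution vector per sample

# Each row attributes a single prediction to the input features; together
# with the explainer's expected value, the attributions sum to the model output.
for i, row in enumerate(shap_values):
    top = X.columns[abs(row).argmax()]
    print(f"sample {i}: most influential feature = {top}")
```

Local attributions of this kind are what the paper contrasts with intrinsically interpretable models, where the explanation is the model structure itself rather than a quantity computed after the fact.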