Explainable AI (XAI) in healthcare: Enhancing trust and transparency in critical decision-making
1 Production Engineer, General Electric HealthCare, Noblesville, Indiana, United States.
2 Department of Communication, Northern Illinois University, USA.
3 Financial Analyst, Comprehensive Community Based Rehabilitation in Tanzania, Tanzania.
Review Article
World Journal of Advanced Research and Reviews, 2024, 23(03), 2647–2658
Publication history:
Received on 14 August 2024; revised on 24 September 2024; accepted on 26 September 2024
Abstract:
The integration of artificial intelligence (AI) in healthcare is revolutionizing diagnostic and treatment procedures, offering unprecedented accuracy and efficiency. However, the opacity of many advanced AI models, often described as "black boxes," creates challenges in adoption due to concerns around trust, transparency, and interpretability, particularly in high-stakes environments like healthcare. Explainable AI (XAI) addresses these concerns by providing a framework that not only achieves high performance but also offers insight into how decisions are made. This research explores the application of XAI techniques in healthcare, focusing on critical areas such as disease diagnostics, predictive analytics, and personalized treatment recommendations. The study analyzes various XAI methods, including model-agnostic approaches (LIME, SHAP), interpretable deep learning models, and domain-specific applications of XAI. It also evaluates the ethical implications, such as accountability and bias mitigation, and examines how XAI can foster collaboration between clinicians and AI systems. Ultimately, the goal is to create AI systems that are both powerful and trustworthy, promoting broader adoption in the healthcare sector while ensuring ethical and safe outcomes for patients.
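To make the model-agnostic approaches named above concrete, the minimal Python sketch below applies LIME to a tabular classifier. The open-source lime and scikit-learn packages are assumed, and scikit-learn's built-in breast-cancer dataset serves only as an illustrative stand-in for clinical records; the model and parameters are not drawn from the study's own experiments.

import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in clinical dataset: 30 tumor features, benign/malignant labels.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# A "black-box" classifier of the kind XAI methods aim to explain.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# LIME fits a simple local surrogate around one prediction at a time.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain the model's prediction for a single patient record.
exp = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")

The printed output lists the five features that most pushed this particular prediction toward or away from each class, which is the kind of per-decision rationale a clinician can audit before acting on a model's recommendation.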
Keywords:
Explainable AI; Healthcare AI; Model Interpretability; Transparent Decision-Making; Predictive Analytics; Ethical AI Systems.
Copyright information:
Copyright © 2024 Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.