World Journal of Advanced Research and Reviews

eISSN: 2581-9615 || CODEN: WJARAI || Impact Factor 8.2


Leveraging explainable AI models to improve predictive accuracy and ethical accountability in healthcare diagnostic decision support systems


Olufunke A Akande *

Department of Computer Science, Franklin University, USA.
 
Review Article
World Journal of Advanced Research and Reviews, 2020, 08(02), 415-434
Article DOI: 10.30574/wjarr.2020.8.2.0384
DOI url: https://doi.org/10.30574/wjarr.2020.8.2.0384
 
Received on 11 September 2020; revised on 25 November 2020; accepted on 28 November 2020
 
Artificial intelligence (AI) has emerged as a transformative force in healthcare, particularly within diagnostic decision support systems (DDSS). However, the integration of black-box predictive models into clinical workflows has raised critical concerns about trust, transparency, and ethical accountability. This study presents a framework for leveraging explainable AI (XAI) models to enhance both predictive accuracy and interpretability in healthcare diagnostics, ensuring that algorithmic outputs are clinically meaningful, ethically sound, and aligned with evidence-based practices. The paper investigates the application of various XAI techniques, including SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and attention mechanisms, in improving transparency and clinician trust during disease risk stratification and diagnostic recommendation. Through comparative modeling experiments across multimodal datasets (EHRs, imaging, lab reports), the study demonstrates that XAI-enhanced models maintain competitive predictive performance while offering interpretable insights into feature contributions and decision logic. To address ethical accountability, the framework includes a real-time auditing layer for bias detection and sensitivity analysis across subpopulations, ensuring fair outcomes for marginalized or underrepresented groups. Integration with clinical feedback loops allows models to evolve iteratively, aligning predictions with practitioner expertise and patient-centered goals. The system also supports regulatory compliance by generating traceable, explainable decision pathways essential for validation and accountability in healthcare governance. By embedding explainability into model design and deployment, this research bridges the gap between AI-driven prediction and ethical, informed clinical judgment, providing a roadmap for the responsible adoption of AI in healthcare, where transparency, fairness, and trust are as critical as technical performance.
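The abstract names SHAP among the XAI techniques studied but the landing page does not show how Shapley attribution works. As a purely illustrative sketch (not the paper's implementation), the following self-contained code computes exact Shapley values by brute force over feature coalitions for a toy linear risk model; the weights, baseline, and feature interpretation are hypothetical assumptions, and absent features are replaced by their baseline values.

```python
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical linear risk score; the weights are illustrative only,
    # not taken from the study.
    w = [0.8, -0.5, 0.3]
    return sum(wi * xi for wi, xi in zip(w, x))

def shapley_values(model, x, baseline):
    """Exact Shapley attribution for one instance x against a baseline.

    phi_i = sum over coalitions S not containing i of
            |S|! (n-|S|-1)! / n! * [v(S ∪ {i}) - v(S)],
    where v(S) evaluates the model with features outside S set to baseline.
    """
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi += weight * (model(with_i) - model(without_i))
        phis.append(phi)
    return phis
```

For a linear model this reduces to `w_i * (x_i - baseline_i)`, and the attributions satisfy the efficiency property: they sum to `model(x) - model(baseline)`. Production tools such as the `shap` library use far more efficient estimators, since the exact computation above is exponential in the number of features.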
 
Keywords: Explainable AI; Healthcare Diagnostics; Ethical Accountability; Decision Support Systems; Interpretability; Clinical Trust
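The abstract also describes a real-time auditing layer that checks fairness across subpopulations. In its simplest form, such an audit compares an error metric per subgroup and flags the spread. The sketch below (group labels, metric choice, and threshold are illustrative assumptions, not details from the study) computes per-group true-positive rates, an equal-opportunity-style check.

```python
from collections import defaultdict

def subgroup_rates(y_true, y_pred, groups):
    """True-positive rate per subgroup (equal-opportunity check).

    y_true, y_pred: 0/1 labels and predictions; groups: subgroup label
    per sample (e.g. a demographic attribute).
    """
    tp = defaultdict(int)
    pos = defaultdict(int)
    for yt, yp, g in zip(y_true, y_pred, groups):
        if yt == 1:
            pos[g] += 1
            if yp == 1:
                tp[g] += 1
    return {g: tp[g] / pos[g] for g in pos}

def max_disparity(rates):
    """Largest gap in the metric between any two subgroups."""
    vals = list(rates.values())
    return max(vals) - min(vals)

# Illustrative usage: flag the model for review if the gap in
# true-positive rate between subgroups exceeds a chosen threshold.
# rates = subgroup_rates(y_true, y_pred, groups)
# needs_review = max_disparity(rates) > 0.1
```

A fuller audit would repeat this for several metrics (false-positive rate, calibration) and run it continuously on incoming predictions, which matches the "real-time" framing in the abstract; the 0.1 threshold here is an arbitrary placeholder.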
 
Full text PDF: https://wjarr.com/sites/default/files/fulltext_pdf/WJARR-2020-0384.pdf

Olufunke A Akande. Leveraging explainable AI models to improve predictive accuracy and ethical accountability in healthcare diagnostic decision support systems. World Journal of Advanced Research and Reviews, 2020, 8(2), 415-434. Article DOI: https://doi.org/10.30574/wjarr.2020.8.2.0384

Copyright © Author(s). All rights reserved. This article is published under the terms of the Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, as long as appropriate credit is given to the original author(s) and source, a link to the license is provided, and any changes made are indicated.


All statements, opinions, and data contained in this publication are solely those of the individual author(s) and contributor(s). The journal, editors, reviewers, and publisher disclaim any responsibility or liability for the content, including accuracy, completeness, or any consequences arising from its use.

