1 Department of Information Technology, Washington University of Science and Technology, Alexandria, VA-22314, USA.
2 Department of MBA, Ashland University, Ashland, OH 44805, USA.
3 Department of Business Administration (Business Analytics Major), Wilmington University, New Castle, DE 19720, USA.
World Journal of Advanced Research and Reviews, 2025, 26(03), 2804-2810
Article DOI: 10.30574/wjarr.2025.26.3.2203
Received on 19 May 2025; revised on 26 June 2025; accepted on 28 June 2025
The introduction of artificial intelligence (AI) into healthcare has risen rapidly, but concerns about transparency, bias, and clinician trust affect the sustainable adoption of these technologies. This paper presents an implementation of the HTI-1 Decision Support Intervention (DSI) model, combined with a NIST AI Risk Management Framework (AI RMF) profile, to design and test usable checklists and transparency artifacts for clinical AI in two health systems. The artifacts included a model fact sheet, calibration tracking, and decision-support explanations, and were structured for compliance with ONC HTI-1/HTI-2 regulations and interoperability standards such as FHIR and TEFCA. Simulation and pilot testing showed improvements in clinician understanding of alerts, successful detection of calibration drift, and a 90% pass rate in bias audits. These results suggest that combining regulatory anchors with practical usability tools can operationalize trustworthy AI at the point of care.
Clinical AI; Transparency; HTI-1; NIST AI RMF; Bias Audit; Calibration Drift; Trustworthy AI
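The abstract reports calibration tracking and the detection of calibration drift among the tested transparency artifacts. As a rough illustration of what such tracking can look like in practice, the sketch below computes expected calibration error (ECE) over a monitoring window and flags drift against a baseline. The function names, binning scheme, threshold, and synthetic data are illustrative assumptions and do not reflect the authors' actual implementation.

```python
# Hypothetical sketch of calibration-drift tracking as described in the abstract.
# Binning, threshold, and data are illustrative assumptions, not the authors' method.
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Expected Calibration Error: bin-weighted mean |observed rate - mean confidence|."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs > lo) & (probs <= hi)
        if mask.any():
            conf = probs[mask].mean()   # mean predicted probability in this bin
            acc = labels[mask].mean()   # observed positive rate in this bin
            ece += mask.mean() * abs(acc - conf)
    return ece

def flag_calibration_drift(baseline_ece, window_probs, window_labels, tolerance=0.05):
    """Flag drift when a monitoring window's ECE exceeds the baseline by `tolerance`."""
    window_ece = expected_calibration_error(window_probs, window_labels)
    return window_ece - baseline_ece > tolerance, window_ece

# Example with synthetic data (illustrative only).
rng = np.random.default_rng(0)
probs = rng.uniform(size=1000)
labels = (rng.uniform(size=1000) < probs).astype(float)   # roughly well-calibrated window
baseline = expected_calibration_error(probs, labels)
drifted, ece = flag_calibration_drift(baseline, probs ** 2, labels)  # miscalibrated window
print(f"baseline ECE={baseline:.3f}, window ECE={ece:.3f}, drift={drifted}")
```

In a deployment of the kind the paper describes, such a check would typically run on scheduled batches of recent predictions and outcomes, with flagged windows feeding into the model fact sheet and governance review rather than triggering automatic action.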
Qazi Rubyya Mariam, Ariful Haque Arif, Abdullah Hill Hussain, Munadil Rashaq and S M Shah Raihena. Building trustworthy clinical AI: usable checklists and transparency artifacts tested in real-world health systems. World Journal of Advanced Research and Reviews, 2025, 26(3), 2804-2810. Article DOI: https://doi.org/10.30574/wjarr.2025.26.3.2203