1 Clinical Associate Professor, ULAN.
2 Primi Dona Magni Research Lab, Awka, Nigeria.
World Journal of Advanced Research and Reviews, 2026, 29(03), 845-855
Article DOI: 10.30574/wjarr.2026.29.3.0138
Received on 10 January 2026; revised on 26 February 2026; accepted on 28 February 2026
Background: Demand pressures in primary care—rising patient volumes, workforce shortages, and administrative overload—have intensified interest in artificial intelligence (AI) tools that can support access, triage, and workflow efficiency. AI chatbots, particularly those using natural language processing and machine learning, are increasingly positioned as scalable interfaces for symptom assessment, patient navigation, and documentation support; however, concerns persist regarding clinical accuracy, safety, bias, privacy, and the patient experience.
Objective: This systematic review evaluated the effectiveness of AI-powered chatbots for primary care triage compared with conventional clinician-led triage, focusing on accuracy, efficiency, and patient satisfaction/usability, and summarised key implementation risks and governance considerations.
Methods: A PRISMA 2020-informed systematic search of PubMed, PubMed Central, ResearchGate, and Google Scholar was conducted for studies published from 2015 to the present. Screening and selection were guided by a PICO framework (primary care triage populations; AI chatbot interventions; clinician/standard triage comparators; outcomes of accuracy, efficiency, satisfaction, and utilisation). Narrative synthesis was used to integrate findings, and methodological quality was appraised using the Critical Appraisal Skills Programme (CASP) checklists.
Results: After deduplication and staged screening, eight studies met the inclusion criteria, comprising surveys, comparative evaluations, narrative and scoping reviews, and health-services analyses. Across the included evidence, chatbots demonstrated potential efficiency gains, particularly in administrative support and rapid generation of clinical documentation; one comparative study reported substantially faster discharge-summary production than clinicians, with broadly similar scores on quality and accuracy metrics. In symptom-related triage contexts, performance varied by model and clinical complexity: a comparative assessment in ophthalmology triage reported higher expert-rated accuracy and clarity for ChatGPT relative to another large language model, while narrative syntheses consistently highlighted limitations in complex reasoning, inconsistent handling of nuanced presentations, and lack of access to non-verbal cues, factors central to safe primary care triage [2–4]. Studies also raised recurring concerns regarding hallucinations, equity-related harms driven by training-data limitations, information governance, and patient trust, suggesting that chatbot triage is best deployed as clinician-supervised decision support rather than as a replacement for professional assessment.
Conclusion: AI chatbots show promise for improving primary care triage efficiency and supporting administrative workflows, but current evidence indicates variable diagnostic/triage accuracy and unresolved challenges in safety, equity, privacy, and patient experience. Responsible integration should prioritise clinically validated use cases, transparent governance, human oversight, and continuous evaluation using patient-centred outcomes and real-world safety monitoring [1–4,6,8].
Keywords: Artificial intelligence; Chatbots; Primary care; Triage; Symptom assessment; Natural language processing; Patient satisfaction.
How to cite: Michael Ajemba and Mgbeahuru Mgbedikearu. AI chatbots for primary care triage: a systematic review. World Journal of Advanced Research and Reviews, 2026, 29(03), 845–855. DOI: https://doi.org/10.30574/wjarr.2026.29.3.0138.