Independent Researcher, London, UK.
World Journal of Advanced Research and Reviews, 2025, 26(03), 2811-2821
Article DOI: 10.30574/wjarr.2025.26.3.2491
Received on 19 May 2025; revised on 25 June 2025; accepted on 27 June 2025
The massive digitalization of the communication environment has created fertile ground for increasingly sophisticated social engineering attacks, which exploit human psychology rather than technical flaws. Today, fraudsters hijack real-time conversations across voice calls, chat platforms, and emails to trick individuals and organizations into divulging private information or approving fraudulent transactions. Traditional fraud detection strategies, which rely largely on static keyword-based heuristics or predefined rule-based detection, have proven less effective against these dynamic, adaptive threats. This paper therefore proposes a real-time fraud detection model based on fine-tuned large language models (LLMs) to bridge this gap. Unlike conventional systems, the proposed architecture leverages deep contextual understanding, semantic reasoning, and intent classification to identify suspicious interactions in live communication environments.
The system integrates several key components: a speech-to-text transcription pipeline for converting voice calls into structured text; a retrieval-augmented generation (RAG) mechanism that incorporates organizational policies and domain-specific knowledge into decision-making; and a feedback loop enabling continuous adaptation to novel fraud strategies. In addition, the framework employs a scenario generator for dataset augmentation, producing contrastive benign versus malicious dialogues to enhance model robustness. Fine-tuned with LoRA and quantization techniques for efficiency, the model performs well in controlled evaluations, identifying fraudulent intent with over 97% accuracy within three conversational turns.
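The detection flow described above can be sketched in miniature. In this hedged illustration, a keyword-overlap lookup stands in for the RAG retrieval step and a toy cue-counting scorer stands in for the fine-tuned LLM intent classifier; all names here (`POLICIES`, `RISK_CUES`, `score_turn`, `detect`) are hypothetical and not taken from the paper:

```python
# Minimal sketch of the abstract's pipeline, under simplifying assumptions:
# keyword-overlap retrieval stands in for the RAG component, and a toy
# cue-based scorer stands in for the fine-tuned LLM intent classifier.
from collections import deque

# Stand-in policy knowledge base the RAG step would normally retrieve from.
POLICIES = {
    "otp": "Staff must never request one-time passcodes over the phone.",
    "wire": "Wire transfers require out-of-band verification.",
    "password": "Passwords are never collected via chat or email.",
}

RISK_CUES = {"urgent", "otp", "password", "wire", "verify", "immediately"}


def retrieve(turn: str) -> list[str]:
    """Return policy snippets whose key term appears in the turn."""
    lowered = turn.lower()
    return [text for key, text in POLICIES.items() if key in lowered]


def score_turn(turn: str) -> float:
    """Toy intent score: fraction of risk cues present (LLM stand-in)."""
    tokens = set(turn.lower().replace(",", " ").replace(".", " ").split())
    return len(tokens & RISK_CUES) / len(RISK_CUES)


def detect(conversation: list[str], window: int = 3, threshold: float = 0.3) -> bool:
    """Flag the dialogue when a rolling 3-turn window crosses the risk
    threshold and at least one policy snippet corroborates the decision."""
    recent: deque[str] = deque(maxlen=window)
    for turn in conversation:
        recent.append(turn)
        risk = sum(score_turn(t) for t in recent)
        evidence = [s for t in recent for s in retrieve(t)]
        if risk >= threshold and evidence:
            return True
    return False
```

The three-turn window mirrors the paper's claim of classifying fraudulent intent within three conversational turns, and requiring retrieved policy evidence alongside the score loosely mirrors how the RAG component grounds decisions in organizational policy.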
Real-world deployment results demonstrate tangible reductions in fraud-related incidents, enhanced decision support for analysts through explainable AI outputs, and greater flexibility in responding to new threats. Beyond advancing the technical state of fraud detection research, this work also contributes to broader research on cybersecurity resilience by demonstrating the feasibility of operationalizing LLMs for high-stakes real-time applications.
Real-Time Fraud Detection; Large Language Models (LLMs); Social Engineering Attacks; Context-Aware Cybersecurity; Retrieval-Augmented Generation (RAG); Explainable Artificial Intelligence (XAI)
Irhimefe Otuburun. Real-Time Fraud Detection Using Large Language Models: A Context-Aware System for Mitigating Social Engineering Threats. World Journal of Advanced Research and Reviews, 2025, 26(3), 2811-2821. Article DOI: https://doi.org/10.30574/wjarr.2025.26.3.2491