Department of Computer Science and Engineering, INFO Institute of Engineering, Kovilpalayam, Coimbatore, India – 641107.
World Journal of Advanced Research and Reviews, 2026, 30(01), 2433-2439
Article DOI: 10.30574/wjarr.2026.30.1.1079
Received on 14 March 2026; revised on 25 April 2026; accepted on 28 April 2026
This paper presents an extended literature survey on deepfake-oriented cyber threats, with a strong focus on practical deployment constraints. Although detection accuracy has improved in recent years, real-world adoption remains limited by privacy concerns, hardware requirements, and weak generalization to unseen media conditions. We examine the evolution of deepfake detection from handcrafted forensic cues to deep multimodal architectures, and we discuss why many high-scoring benchmark models fail under operational workloads. The survey highlights a local-first strategy that combines multimodal evidence, interpretable outputs, and resource-aware model design so that robust detection can run on consumer-grade systems. The goal is to support trustworthy, privacy-preserving defense against impersonation fraud, misinformation, and identity abuse in modern digital ecosystems.
Deepfake detection; Cybersecurity; Multimodal learning; Privacy-preserving AI; Explainable AI; Edge inference
G. Selvavinayagam, E. Guhan, A. Sankar Raman, R. Vanipriya and S. Vinoth. Privacy-focused artificial intelligence model for detecting deepfake-based cyber threats. World Journal of Advanced Research and Reviews, 2026, 30(01), 2433-2439. Article DOI: https://doi.org/10.30574/wjarr.2026.30.1.1079.