World Journal of Advanced Research and Reviews

eISSN: 2581-9615 || CODEN: WJARAI || Impact Factor 8.2


Robust detection and mitigation strategies against adversarial attacks on AI systems for enhanced cybersecurity resilience


Shadrack Onyango Oriaro *

Robert Morris University, School of Data Intelligence & Technology, Pittsburgh, Pennsylvania, USA.

Review Article

World Journal of Advanced Research and Reviews, 2025, 27(03), 165-175

Article DOI: 10.30574/wjarr.2025.27.3.2560

DOI url: https://doi.org/10.30574/wjarr.2025.27.3.2560

Received on 28 May 2025; revised on 24 August 2025; accepted on 28 August 2025

Artificial intelligence (AI) and machine learning are increasingly deployed in cybersecurity, surveillance, autonomous driving, and medical diagnostics. Recent research shows, however, that these systems can be fooled by adversarial inputs: examples carefully perturbed in ways imperceptible to humans yet sufficient to cause misclassification. Such attacks enable spoofing, evasion of monitoring systems, and manipulation of automated decisions, with serious security consequences. Building safe, dependable, and trustworthy AI therefore requires that adversarial vulnerabilities be detected and remediated. This study surveys current methods for detecting adversarial samples and constructing robust defenses. On the detection side, it examines analysis of model predictions, statistical detection using data-forensic techniques, and confidence-score screening. On the mitigation side, it discusses adversarial training, defensive distillation, ensemble defenses, and architectural adjustments. Baseline datasets and standardized threat models are used to compare detection and protection methods, and a framework combining robust defense tactics with detection algorithms is proposed to make deployed AI systems safer. Recommended practices include continuous adversarial monitoring and threat-intelligence sharing. Despite substantial progress, open challenges remain: operating under realistic real-world conditions, certifying reliability, and coping with adaptive adversaries. Addressing these challenges will require sustained work from the AI safety, security, and cybersecurity communities. This survey offers guidelines for AI safety and identifies research gaps; applying these precautions can reduce adversarial machine-learning threats and allow AI to be deployed safely across domains.

Keywords: Adversarial Attacks; Adversarial Machine Learning; AI Security; Cyber Resilience; Detection; Mitigation; Defenses; Robustness
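To make the attack and detection ideas in the abstract concrete, the following is a minimal sketch of an adversarial perturbation in the style of the Fast Gradient Sign Method, paired with a crude confidence-score detector of the kind the article surveys. It is illustrative only: the hand-built logistic-regression "model", its weights, the input, the perturbation budget `eps`, and the 0.9 confidence threshold are all invented for demonstration and are not taken from the article.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y_true, eps):
    """FGSM-style attack: nudge x in the direction that increases
    the cross-entropy loss for the true label y_true."""
    p = predict(w, b, x)
    grad_x = (p - y_true) * w          # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

def flag_low_confidence(w, b, x, threshold=0.9):
    """Confidence-score detector: flag inputs the model is unsure about."""
    p = predict(w, b, x)
    return max(p, 1.0 - p) < threshold

# Illustrative model and input (all values invented for this sketch).
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x_clean = np.array([0.8, -0.6, 0.3])   # confidently class 1
y_true = 1.0

x_adv = fgsm_perturb(w, b, x_clean, y_true, eps=0.8)

print(predict(w, b, x_clean))          # ~0.93: correct and confident
print(predict(w, b, x_adv))            # ~0.37: flipped to class 0
print(flag_low_confidence(w, b, x_adv))  # True: detector raises a flag
```

The same pattern — generate perturbed inputs, then retrain on them — underlies adversarial training; here the sketch only shows the attack and a detection heuristic, since the article's own experimental setup is not reproduced.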

https://wjarr.com/sites/default/files/fulltext_pdf/WJARR-2025-2560.pdf


Shadrack Onyango Oriaro. Robust detection and mitigation strategies against adversarial attacks on AI systems for enhanced cybersecurity resilience. World Journal of Advanced Research and Reviews, 2025, 27(3), 165-175. Article DOI: https://doi.org/10.30574/wjarr.2025.27.3.2560

Copyright © Author(s). All rights reserved. This article is published under the terms of the Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, as long as appropriate credit is given to the original author(s) and source, a link to the license is provided, and any changes made are indicated.


All statements, opinions, and data contained in this publication are solely those of the individual author(s) and contributor(s). The journal, editors, reviewers, and publisher disclaim any responsibility or liability for the content, including accuracy, completeness, or any consequences arising from its use.

