Enhancing the security of AI-driven autonomous systems through adversarially robust deep learning models
1 Department of Electrical Engineering and Computer Science, Ohio University, OH, USA.
2 Department of Mathematics, University of Lagos, Akoka, Lagos, Nigeria.
3 Department of Mathematics, Lamar University, Beaumont, TX, USA.
Research Article
World Journal of Advanced Research and Reviews, 2023, 20(01), 1336-1351
Publication history:
Received on 13 September 2023; revised on 24 October 2023; accepted on 26 October 2023
Abstract:
Adversarial attacks pose a significant threat to AI-driven autonomous systems by exploiting vulnerabilities in deep learning models, leading to erroneous decision-making in safety-critical applications. This study investigates the effectiveness of adversarial training as a defense mechanism to enhance model robustness against adversarial perturbations. We evaluate multiple deep learning architectures subjected to Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and Carlini & Wagner (CW) attacks, comparing adversarially trained models with standard models in terms of accuracy, robustness, and computational efficiency. The results demonstrate that adversarial training significantly improves resistance to adversarial attacks, reducing attack success rates by over 50% while maintaining high classification performance. However, a trade-off between robustness and inference time was observed, highlighting computational cost concerns. Furthermore, our findings reveal that adversarial robustness partially transfers across architectures but remains susceptible to advanced optimization-based attacks. This study contributes to the development of more secure AI-driven autonomous systems by identifying strengths and limitations of adversarial training, offering insights into future improvements in adversarial defense strategies.
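For concreteness, the sketch below illustrates the single-step FGSM perturbation and one adversarial-training update of the kind evaluated in this study. It is a minimal, illustrative example rather than the authors' exact experimental setup: it assumes a PyTorch image classifier with inputs scaled to [0, 1], and the model handle, optimizer, and perturbation budget `epsilon` are hypothetical placeholders.

```python
# Minimal FGSM / adversarial-training sketch (illustrative only; not the
# authors' exact configuration). Assumes a PyTorch classifier `model`,
# a batch of inputs `x` in [0, 1], labels `y`, and an optimizer.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Generate an FGSM adversarial example with one signed-gradient step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb each input element in the direction that increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One adversarial-training step: fit the model on FGSM-perturbed inputs."""
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```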
Keywords:
Adversarial Machine Learning; Deep Learning Security; Cybersecurity in AI; Neural Network Vulnerabilities
Copyright information:
Copyright © 2023. The author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.