Securing the AI supply chain: Mitigating vulnerabilities in AI model development and deployment
Independent researcher, Uganda.
Research Article
World Journal of Advanced Research and Reviews, 2024, 22(02), 2336-2346
Article DOI: 10.30574/wjarr.2024.22.2.1394
Publication history:
Received on 27 March 2024; revised on 05 May 2024; accepted on 07 May 2024
Abstract:
The rapid advancement and integration of Artificial Intelligence (AI) across critical sectors — including healthcare, finance, defense, and infrastructure — have exposed an often-overlooked risk: vulnerabilities within the AI supply chain. This research examines the security challenges and potential threats affecting AI model development and deployment, focusing on adversarial attacks, data poisoning, model theft, and compromised third-party components. By dissecting the AI supply chain into its core stages — data sourcing, model training, deployment, and maintenance — this study identifies key entry points for malicious actors.
The paper proposes a multi-layered security framework combining blockchain-based data provenance, federated learning for decentralized model training, and zero-trust architecture to ensure secure deployment.
Additionally, it explores how adversarial training, model watermarking, and real-time anomaly detection can mitigate risks without sacrificing model performance. Case studies of high-profile AI breaches are analyzed to demonstrate the consequences of unsecured pipelines, emphasizing the urgency of securing AI systems.
Keywords:
Artificial Intelligence; AI Model Development; AI Supply Chain; Robust Model Design
This paper received the Best Paper Award for Volume 22, Issue 2 (May 2024).
Copyright information:
Copyright © 2024. The author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.