World Journal of Advanced Research and Reviews
International Journal with High Impact Factor for fast publication of Research and Review articles

eISSN: 2581-9615 || CODEN: WJARAI || Impact Factor 8.2 ||  CrossRef DOI

Threat Landscape in Artificial Intelligence Systems: Taxonomy, Attack Vectors and Security Implications

Vishnu Kiran Bollu *

Senior SAP Security and Governance Specialist. 

Research Article

World Journal of Advanced Research and Reviews, 2026, 29(01), 285-294

Article DOI: 10.30574/wjarr.2026.29.1.0007

DOI url: https://doi.org/10.30574/wjarr.2026.29.1.0007

Received on 27 November 2025; revised on 04 January 2026; accepted on 07 January 2026

The rapid integration of Artificial Intelligence (AI) systems across critical sectors such as healthcare, finance, autonomous transportation, and national security has fundamentally altered the global cybersecurity threat landscape. Unlike traditional software systems, AI introduces novel vulnerabilities rooted in data-driven learning, model opacity, and high-dimensional decision boundaries. This paper presents a comprehensive analysis of the evolving threat landscape in AI systems, focusing on adversarial machine learning attacks, data poisoning, privacy inference, model extraction, supply-chain vulnerabilities, and emerging risks in generative AI and large language models (LLMs). A structured taxonomy of AI-specific threats is proposed, mapping attack vectors to lifecycle stages and adversary capabilities. The study further evaluates real-world attack scenarios, sector-specific impacts, and systemic risks arising from interconnected AI ecosystems. The paper concludes by outlining detection strategies, governance considerations, and future research directions necessary to ensure secure, trustworthy, and resilient AI deployments.
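The first threat class named in the abstract, adversarial machine learning, can be illustrated with a minimal FGSM-style sketch on a toy linear classifier. This is an illustrative assumption, not code or a model from the paper: the weights, input, and perturbation budget `epsilon` are made up for demonstration.

```python
import numpy as np

def fgsm_perturb(x, w, epsilon):
    """FGSM-style adversarial perturbation for a linear scorer w . x.

    For a linear model the gradient of the score with respect to x is
    simply w, so stepping each feature by -epsilon * sign(w) pushes the
    score toward the opposite class while bounding the per-feature
    change by epsilon.
    """
    return x - epsilon * np.sign(w)

# Toy classifier: predict class +1 when w . x > 0, class -1 otherwise.
w = np.array([0.5, -0.3, 0.8])   # illustrative weights
x = np.array([1.0, 1.0, 1.0])    # clean input, score = 1.0 -> class +1

x_adv = fgsm_perturb(x, w, epsilon=1.0)

print("clean score:      ", np.dot(w, x))      # positive
print("adversarial score:", np.dot(w, x_adv))  # flipped to negative
```

The same bounded-perturbation idea, applied via backpropagated gradients instead of a closed-form `w`, underlies the evasion attacks on deep models that the paper surveys.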

Keywords: AI Security; Adversarial Machine Learning; Data Poisoning; Model Extraction; Privacy Attacks; Large Language Models; Threat Modeling; Cybersecurity

https://wjarr.com/sites/default/files/fulltext_pdf/WJARR-2026-0007.pdf

Vishnu Kiran Bollu. Threat Landscape in Artificial Intelligence Systems: Taxonomy, Attack Vectors and Security Implications. World Journal of Advanced Research and Reviews, 2026, 29(1), 285-294. Article DOI: https://doi.org/10.30574/wjarr.2026.29.1.0007

Copyright © Author(s). All rights reserved. This article is published under the terms of the Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, as long as appropriate credit is given to the original author(s) and source, a link to the license is provided, and any changes made are indicated.


All statements, opinions, and data contained in this publication are solely those of the individual author(s) and contributor(s). The journal, editors, reviewers, and publisher disclaim any responsibility or liability for the content, including accuracy, completeness, or any consequences arising from its use.
