MITRE ATLAS Attack Landscape

1,550 AI-related CVEs analyzed → 5,003 mappings across 101 ATLAS techniques (each CVE may match multiple techniques).

  • 1,550 AI CVEs
  • 101 ATLAS techniques
  • 5,003 total mappings
  • #1 technique: Exploit Public-Facing Application (1,183 CVEs)

CISO Analysis. Data updated 2026-05-10.

Executive Summary

The AI attack landscape is dominated by a small set of high-volume techniques. Across 1,550 AI-related CVEs mapped to 101 MITRE ATLAS techniques, Exploit Public-Facing Application (AML.T0049) leads with 1,183 mapped CVEs, reflecting the reality that most AI/ML systems are deployed behind web APIs with insufficient input validation. AI Software (752), Denial of AI Service (473), Unsafe AI Artifacts (285), and Exfiltration via Cyber Means (285) round out the top five.

The concentration is meaningful: the top 5 techniques account for 43.1% of all CVE-to-technique mappings, while the long tail of remaining techniques has much sparser coverage; 22 of the 101 techniques have no mapped CVEs at all. Security teams can achieve disproportionate risk reduction by focusing detection and response on a small set of attack patterns rather than spreading resources thin across the full ATLAS matrix.

Key Findings

  • Initial Access dominates the tactic ranking with 1,236 unique CVEs — public-facing exposure plus weaknesses in the software stack around models drive this category. Impact (546) and Execution (467) follow.
  • AI Software is the second-largest technique with 752 CVEs (AML.T0010.001). This category is essentially the shadow attack surface around the model: deserialization in pickle files, RCE in inference servers, unsafe deserializers in agent frameworks.
  • Denial of AI Service is more prevalent than commonly assumed. 473 CVEs target this surface. Most AI incident response plans don't cover availability attacks at all.
  • 16 AI CVEs are in CISA's KEV catalog — actively exploited in the wild. They span inference servers (Ollama, vLLM), MLOps platforms (MLflow), and UI frameworks (Gradio).
  • 776 AI CVEs, roughly half the landscape, have public exploit code available. Weaponized PoCs dramatically shorten the window between disclosure and active exploitation.
  • Growth is steady. 266 new AI-related CVEs were added in the last 30 days alone, confirming the threat surface is expanding faster than most security programs adapt.
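The deserialization risk called out in the AI Software finding can be made concrete. The sketch below, which is illustrative rather than part of the analysis, scans an untrusted model artifact for pickle opcodes that can import and invoke arbitrary callables, without ever loading it:

```python
import io
import pickle
import pickletools

# Opcodes that can import and call arbitrary callables during unpickling.
DANGEROUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(data: bytes) -> list[str]:
    """Return the names of risky opcodes found in a pickle stream."""
    found = []
    for opcode, arg, pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name in DANGEROUS_OPCODES:
            found.append(opcode.name)
    return found

# A payload of the classic __reduce__ flavor is flagged (print stands in
# for os.system; the callable would only run at unpickling time)...
class Evil:
    def __reduce__(self):
        return (print, ("pwned",))

print(scan_pickle(pickle.dumps(Evil())))                   # flags REDUCE

# ...while a plain data pickle comes back clean.
print(scan_pickle(pickle.dumps({"weights": [1, 2, 3]})))   # []
```

Scanning is a triage aid, not a guarantee; preferring non-executable formats such as safetensors for model weights removes the class of bug entirely.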

Trend Analysis

The shift from model-level attacks (adversarial examples, jailbreaks) toward infrastructure-level exploitation marks a maturation of the AI threat landscape. The data is unambiguous: the real attack surface is the software stack around the models — frameworks, APIs, serving infrastructure, data pipelines.

Agent frameworks remain the emerging frontier. As AI systems gain tool-use capabilities (file access, code execution, web browsing), each tool integration becomes a potential attack vector. Agent-related CVEs continue to grow in both volume and severity, with many enabling remote code execution through prompt injection chains that pivot into the underlying tool runtime.
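The pivot from prompt injection into the underlying tool runtime is usually blocked at the invocation boundary. A minimal sketch of that idea follows, with a hypothetical deny-by-default tool policy; the tool names and argument patterns are illustrative, not drawn from any real framework:

```python
import re

# Hypothetical policy: which tools an agent may call, and what their
# arguments must look like. Anything not listed is rejected outright.
TOOL_POLICY = {
    "read_file": re.compile(r"^/workspace/[\w./-]+$"),  # confined to sandbox dir
    "web_search": re.compile(r"^[\w\s\-\"]{1,200}$"),   # plain query text only
}

def validate_tool_call(tool: str, argument: str) -> bool:
    """Gate a model-proposed tool invocation before it reaches the runtime."""
    pattern = TOOL_POLICY.get(tool)
    if pattern is None:
        return False                      # unknown tool: deny by default
    if "\x00" in argument or ".." in argument:
        return False                      # crude traversal / null-byte check
    return pattern.fullmatch(argument) is not None

assert validate_tool_call("read_file", "/workspace/notes.txt")
assert not validate_tool_call("read_file", "/etc/passwd")
assert not validate_tool_call("shell", "rm -rf /")   # tool not in allowlist
```

The key design choice is deny-by-default: a prompt-injected tool call that invents a new tool name or escapes the argument pattern fails closed instead of reaching the runtime.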

The patching picture is more nuanced than the early "crisis" narrative suggested. Across all AI package CVE associations, 40.3% have a documented fix available — better than initially feared, but still well below the 60-70% rate typical of the broader software ecosystem. The gap between AI tooling and mainstream software security maturity is real but narrowing.

Recommendations

  1. Prioritize the top 5 ATLAS techniques for detection engineering. Build detection rules specifically for the leading techniques shown above. Together they account for 43.1% of all CVE-to-technique mappings.
  2. Audit your AI supply chain. Inventory all AI/ML dependencies, check against our package risk scores, and establish a vetting process for new framework adoption. Pay special attention to packages with risk scores above 70 (PyTorch, Ollama, MLflow, Gradio, LiteLLM, LangChain, LangFlow).
  3. Implement input validation at every AI system boundary. The dominance of "Exploit Public-Facing Application" and "AI Software" mappings means robust input sanitization at API endpoints, model inputs, and agent tool interfaces delivers the highest security ROI.
  4. Monitor CISA KEV for AI-specific entries. The 16 AI CVEs currently in KEV should be patched within CISA's remediation timelines. Set up automated alerts for new AI KEV additions.
  5. Plan for AI system availability attacks. Include resource exhaustion, recursive loops, and inference overload in incident response playbooks. Most organizations lack AI-specific DoS detection.
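Recommendation 5 can start as something as simple as a per-client token bucket in front of the inference endpoint, with request cost scaled by prompt size so a flood of maximum-length requests exhausts the budget first. A hedged sketch, with illustrative (untuned) parameters:

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter for an inference endpoint (illustrative)."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # burst ceiling
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# Ten expensive requests arrive in a burst; only the first few fit the budget.
bucket = TokenBucket(rate=10, capacity=20)
accepted = sum(bucket.allow(cost=5) for _ in range(10))
print(accepted)   # 4 requests fit in the initial 20-token burst
```

Production deployments would key buckets per client and alert on sustained rejection rates, which doubles as the DoS detection signal most playbooks currently lack.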

Methodology

This analysis is based on 1,550 AI-related CVEs tracked by AI Threat Alert, mapped to 101 MITRE ATLAS techniques via automated enrichment (Claude AI) and manual validation. Technique frequency reflects the number of distinct CVEs mapped to each technique; because a single CVE may map to multiple techniques, the 5,003 total mappings exceed the CVE count. Tactic counts reflect distinct CVEs mapped to any technique under that tactic. Data sources include NVD, GitHub Security Advisories, CISA KEV, EPSS, OSV, and vendor advisories.
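The counting rule above (distinct CVEs per technique, with a multi-technique CVE counted once under each technique it maps to) can be sketched with toy data:

```python
from collections import defaultdict

# Toy mapping rows: (cve_id, technique_id). One CVE may map to several
# techniques, so total mappings exceed the number of distinct CVEs.
mappings = [
    ("CVE-2024-0001", "AML.T0049"),
    ("CVE-2024-0001", "AML.T0029"),   # same CVE, second technique
    ("CVE-2024-0002", "AML.T0049"),
    ("CVE-2024-0003", "AML.T0025"),
]

per_technique = defaultdict(set)
for cve, technique in mappings:
    per_technique[technique].add(cve)

frequency = {t: len(cves) for t, cves in per_technique.items()}
print(frequency)        # {'AML.T0049': 2, 'AML.T0029': 1, 'AML.T0025': 1}
print(len(mappings))    # 4 mappings from 3 distinct CVEs
```

Collecting CVE IDs into sets before counting is what keeps a CVE that appears in several mapping rows for the same technique from being double-counted.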

#  Technique ID  Technique Name  CVEs
1 AML.T0049 Exploit Public-Facing Application 1183
2 AML.T0029 Denial of AI Service 473
3 AML.T0025 Exfiltration via Cyber Means 285
4 AML.T0053 AI Agent Tool Invocation 279
5 AML.T0055 Unsecured Credentials 229
6 AML.T0050 Command and Scripting Interpreter 228
7 AML.T0037 Data from Local System 206
8 AML.T0012 Valid Accounts 184
9 AML.T0034 Cost Harvesting 179
10 AML.T0083 Credentials from AI Agent Configuration 149
11 AML.T0040 AI Model Inference API Access 137
12 AML.T0072 Reverse Shell 137
13 AML.T0058 Publish Poisoned Models 106
14 AML.T0035 AI Artifact Collection 99
15 AML.T0107 Exploitation for Defense Evasion 97
16 AML.T0081 Modify AI Agent Configuration 95
17 AML.T0011 User Execution 87
18 AML.T0074 Masquerading 71
19 AML.T0086 Exfiltration via AI Agent Tool Invocation 65
20 AML.T0106 Exploitation for Credential Access 64
21 AML.T0006 Active Scanning 63
22 AML.T0043 Craft Adversarial Data 60
23 AML.T0105 Escape to Host 58
24 AML.T0078 Drive-by Compromise 40
25 AML.T0080 AI Agent Context Poisoning 38
26 AML.T0075 Cloud Service Discovery 32
27 AML.T0085 Data from AI Services 30
28 AML.T0084 Discover AI Agent Configuration 27
29 AML.T0020 Poison Training Data 22
30 AML.T0051 LLM Prompt Injection 21
31 AML.T0057 LLM Data Leakage 20
32 AML.T0007 Discover AI Artifacts 19
33 AML.T0101 Data Destruction via AI Agent Tool Invocation 18
34 AML.T0018 Manipulate AI Model 17
35 AML.T0079 Stage Capabilities 16
36 AML.T0036 Data from Information Repositories 15
37 AML.T0052 Phishing 14
38 AML.T0021 Establish Accounts 13
39 AML.T0031 Erode AI Model Integrity 12
40 AML.T0070 RAG Poisoning 12
41 AML.T0001 Search Open AI Vulnerability Analysis 11
42 AML.T0076 Corrupt AI Model 9
43 AML.T0098 AI Agent Tool Credential Harvesting 9
44 AML.T0110 AI Agent Tool Poisoning 9
45 AML.T0059 Erode Dataset Integrity 8
46 AML.T0102 Generate Malicious Commands 8
47 AML.T0108 AI Agent 8
48 AML.T0044 Full AI Model Access 7
49 AML.T0056 Extract LLM System Prompt 7
50 AML.T0064 Gather RAG-Indexed Targets 7
51 AML.T0093 Prompt Infiltration via Public-Facing Application 7
52 AML.T0099 AI Agent Tool Data Poisoning 7
53 AML.T0073 Impersonation 6
54 AML.T0097 Virtualization/Sandbox Evasion 6
55 AML.T0112 Machine Compromise 6
56 AML.T0024 Exfiltration via AI Inference API 5
57 AML.T0054 LLM Jailbreak 5
58 AML.T0087 Gather Victim Identity Information 5
59 AML.T0096 AI Service API 5
60 AML.T0104 Publish Poisoned AI Agent Tool 5
61 AML.T0019 Publish Poisoned Datasets 4
62 AML.T0063 Discover AI Model Outputs 4
63 AML.T0010 AI Supply Chain Compromise 3
64 AML.T0015 Evade AI Model 3
65 AML.T0065 LLM Prompt Crafting 3
66 AML.T0091 Use Alternate Authentication Material 3
67 AML.T0100 AI Agent Clickbait 3
68 AML.T0109 AI Supply Chain Rug Pull 3
69 AML.T0014 Discover AI Model Family 2
70 AML.T0066 Retrieval Content Crafting 2
71 AML.T0077 LLM Response Rendering 2
72 AML.T0000 Search Open Technical Databases 1
73 AML.T0046 Spamming AI System with Chaff Data 1
74 AML.T0047 AI-Enabled Product or Service 1
75 AML.T0061 LLM Prompt Self-Replication 1
76 AML.T0069 Discover LLM System Information 1
77 AML.T0071 False RAG Entry Injection 1
78 AML.T0092 Manipulate User LLM Chat History 1
79 AML.T0094 Delay Execution of LLM Instructions 1
80 AML.T0002 Acquire Public AI Artifacts 0
81 AML.T0003 Search Victim-Owned Websites 0
82 AML.T0004 Search Application Repositories 0
83 AML.T0005 Create Proxy AI Model 0
84 AML.T0008 Acquire Infrastructure 0
85 AML.T0013 Discover AI Model Ontology 0
86 AML.T0016 Obtain Capabilities 0
87 AML.T0017 Develop Capabilities 0
88 AML.T0041 Physical Environment Access 0
89 AML.T0042 Verify Attack 0
90 AML.T0048 External Harms 0
91 AML.T0060 Publish Hallucinated Entities 0
92 AML.T0062 Discover LLM Hallucinations 0
93 AML.T0067 LLM Trusted Output Components Manipulation 0
94 AML.T0068 LLM Prompt Obfuscation 0
95 AML.T0082 RAG Credential Harvesting 0
96 AML.T0088 Generate Deepfakes 0
97 AML.T0089 Process Discovery 0
98 AML.T0090 OS Credential Dumping 0
99 AML.T0095 Search Open Websites/Domains 0
100 AML.T0103 Deploy AI Agent 0
101 AML.T0111 AI Supply Chain Reputation Inflation 0
