MITRE ATLAS Attack Landscape
1,604 AI-related CVEs analyzed → 5,003 mappings across 170 ATLAS techniques (each CVE may map to multiple techniques).
Executive Summary
The AI attack landscape is dominated by a small set of high-volume techniques. Across 1,604 AI-related CVEs mapped to 170 MITRE ATLAS techniques, Exploit Public-Facing Application (AML.T0049) leads with 1,183 mapped CVEs — reflecting the reality that most AI/ML systems are deployed behind web APIs with insufficient input validation. AI Software (752), Denial of AI Service (473), Unsafe AI Artifacts (285), and Exfiltration via Cyber Means (285) round out the top five.
The concentration is meaningful: the top 5 techniques account for 43.1% of all CVE-to-technique mappings, while the long tail spans the remaining 165 techniques with much sparser coverage. Security teams can achieve disproportionate risk reduction by focusing detection and response on a small set of attack patterns — rather than spreading resources thin across the full ATLAS matrix.
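The top-5 share can be computed directly from the raw mapping table. A minimal sketch with a toy mapping list — the CVE IDs are placeholders and the counts are illustrative, not the report's actual data:

```python
from collections import Counter

def top_k_share(mappings, k=5):
    """Fraction of all CVE-to-technique mappings covered by the k
    most frequently mapped techniques."""
    counts = Counter(technique for _, technique in mappings)
    top = counts.most_common(k)
    return sum(n for _, n in top) / len(mappings)

# Toy mapping table: (cve_id, technique_id) pairs. IDs are illustrative only.
sample = [
    ("CVE-A", "AML.T0049"), ("CVE-B", "AML.T0049"), ("CVE-C", "AML.T0049"),
    ("CVE-A", "AML.T0010.001"), ("CVE-D", "AML.T0010.001"),
    ("CVE-E", "AML.T0029"),
]
print(round(top_k_share(sample, k=2), 3))  # 5 of 6 mappings → 0.833
```

Note the denominator is mappings, not unique CVEs — a CVE mapped to three techniques contributes three rows, which is why the top-5 share and per-technique CVE counts are reported separately.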
Key Findings
- Initial Access dominates the tactic ranking with 1,236 unique CVEs — public-facing exposure plus weaknesses in the software stack around models drive this category. Impact (546) and Execution (467) follow.
- AI Software (AML.T0010.001) is the second-largest technique with 752 CVEs. This category is essentially the shadow attack surface around the model: unsafe pickle deserialization, RCE in inference servers, and insecure deserializers in agent frameworks.
- Denial of AI Service is more prevalent than commonly assumed. 473 CVEs target this surface. Most AI incident response plans don't cover availability attacks at all.
- 16 AI CVEs are in CISA's KEV catalog — actively exploited in the wild. They span inference servers (Ollama, vLLM), MLOps platforms (MLflow), and UI frameworks (Gradio).
- 776 AI CVEs (48%) have public exploit code available — almost half of the AI CVE landscape has weaponized PoCs, dramatically shortening the window between disclosure and active exploitation.
- Growth is steady. 266 new AI-related CVEs were added in the last 30 days alone, confirming the threat surface is expanding faster than most security programs adapt.
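The KEV finding above implies a simple operational check: intersect your tracked CVE inventory with the catalog's `cveID` field. A sketch against an in-memory snippet shaped like CISA's KEV JSON feed — the records below are illustrative placeholders, not real KEV entries:

```python
def kev_matches(kev_catalog: dict, tracked_cves: set) -> list:
    """Return KEV entries whose cveID appears in our tracked CVE inventory."""
    return [v for v in kev_catalog["vulnerabilities"] if v["cveID"] in tracked_cves]

# Illustrative records in the shape of CISA's KEV feed (not real entries).
kev = {
    "vulnerabilities": [
        {"cveID": "CVE-2024-0001", "product": "ExampleServe", "dueDate": "2024-07-01"},
        {"cveID": "CVE-2024-0002", "product": "OtherTool", "dueDate": "2024-08-15"},
    ]
}
tracked = {"CVE-2024-0001", "CVE-2023-9999"}
for entry in kev_matches(kev, tracked):
    print(entry["cveID"], "due", entry["dueDate"])  # flag for remediation-deadline tracking
```

In production this would run against the live catalog on a schedule; any hit inherits CISA's `dueDate` as the remediation deadline.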
Trend Analysis
The shift from model-level attacks (adversarial examples, jailbreaks) toward infrastructure-level exploitation marks a maturation of the AI threat landscape. The data is unambiguous: the real attack surface is the software stack around the models — frameworks, APIs, serving infrastructure, data pipelines.
Agent frameworks remain the emerging frontier. As AI systems gain tool-use capabilities (file access, code execution, web browsing), each tool integration becomes a potential attack vector. Agent-related CVEs continue to grow in both volume and severity, with many enabling remote code execution through prompt injection chains that pivot into the underlying tool runtime.
The patching picture is more nuanced than the early "crisis" narrative suggested. Across all AI package CVE associations, 40.3% have a documented fix available — better than initially feared, but still well below the 60-70% rate typical of the broader software ecosystem. The gap between AI tooling and mainstream software security maturity is real but narrowing.
Recommendations
- Prioritize the top 5 ATLAS techniques for detection engineering. Build detection rules specifically for the leading techniques shown above. Together they cover 43.1% of all CVE-to-technique mappings.
- Audit your AI supply chain. Inventory all AI/ML dependencies, check against our package risk scores, and establish a vetting process for new framework adoption. Pay special attention to packages with risk scores above 70 (PyTorch, Ollama, MLflow, Gradio, LiteLLM, LangChain, LangFlow).
- Implement input validation at every AI system boundary. The dominance of "Exploit Public-Facing Application" and "AI Software" mappings means robust input sanitization at API endpoints, model inputs, and agent tool interfaces delivers the highest security ROI.
- Monitor CISA KEV for AI-specific entries. The 16 AI CVEs currently in KEV should be patched within CISA's remediation timelines. Set up automated alerts for new AI KEV additions.
- Plan for AI system availability attacks. Include resource exhaustion, recursive loops, and inference overload in incident response playbooks. Most organizations lack AI-specific DoS detection.
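As one concrete instance of the boundary-validation recommendation, a file-read tool exposed to an agent can refuse arguments that are oversized or escape an allowlisted root. A minimal sketch — the sandbox root, length limit, and function name are assumptions for illustration, not from the report:

```python
from pathlib import Path

ALLOWED_ROOT = Path("/srv/agent-workspace").resolve()  # assumed sandbox root
MAX_ARG_LEN = 4096                                     # assumed size cap

def validate_tool_path(raw: str) -> Path:
    """Reject oversized or traversal-style arguments before the tool runs."""
    if len(raw) > MAX_ARG_LEN:
        raise ValueError("argument too long")
    # Joining with an absolute `raw` replaces the root entirely,
    # so the containment check below also catches absolute paths.
    candidate = (ALLOWED_ROOT / raw).resolve()
    if ALLOWED_ROOT not in candidate.parents and candidate != ALLOWED_ROOT:
        raise ValueError(f"path escapes sandbox: {raw}")
    return candidate

print(validate_tool_path("notes/plan.txt"))   # resolves inside the root
try:
    validate_tool_path("../../etc/passwd")    # traversal attempt
except ValueError as e:
    print("blocked:", e)
```

The same pattern — normalize, then check containment against an allowlist — generalizes to URLs, shell arguments, and any other agent tool parameter.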
Methodology
This analysis is based on 1,604 AI-related CVEs tracked by AI Threat Alert, mapped to 170 MITRE ATLAS techniques via automated enrichment (Claude AI) and manual validation. Technique frequency reflects the number of distinct CVEs mapped to each technique — a single CVE may map to multiple techniques. Tactic counts reflect distinct CVEs mapped to any technique under that tactic. Data sources include NVD, GitHub Security Advisories, CISA KEV, EPSS, OSV, and vendor advisories. All numeric values in this analysis are drawn directly from the underlying database.
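The counting rules above — distinct CVEs per technique, distinct CVEs per tactic — can be made precise with a short sketch over a mapping table. The rows here are illustrative placeholders, not records from the dataset:

```python
from collections import defaultdict

# Rows: (cve_id, technique_id, tactic). Illustrative placeholders only.
rows = [
    ("CVE-A", "AML.T0049", "Initial Access"),
    ("CVE-A", "AML.T0010.001", "Initial Access"),  # one CVE, two techniques
    ("CVE-B", "AML.T0049", "Initial Access"),
    ("CVE-B", "AML.T0029", "Impact"),
]

technique_cves = defaultdict(set)
tactic_cves = defaultdict(set)
for cve, technique, tactic in rows:
    technique_cves[technique].add(cve)  # a CVE counts once per technique
    tactic_cves[tactic].add(cve)        # and once per tactic, across its techniques

print({t: len(s) for t, s in technique_cves.items()})
# → {'AML.T0049': 2, 'AML.T0010.001': 1, 'AML.T0029': 1}
print({t: len(s) for t, s in tactic_cves.items()})
# → {'Initial Access': 2, 'Impact': 1}
```

Using sets rather than raw row counts is what keeps tactic totals from double-counting a CVE that maps to several techniques under the same tactic.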