ATLAS Landscape
AML.T0024
Exfiltration via AI Inference API
Adversaries may exfiltrate private information via [AI Model Inference API Access](/techniques/AML.T0040). AI models have been shown to leak private information about their training data (e.g., [Infer Training Data Membership](/techniques/AML.T0024.000), [Invert AI Model](/techniques/AML.T0024.001)). The model itself may also be extracted ([Extract AI Model](/techniques/AML.T0024.002)) for the purposes of [AI Intellectual Property Theft](/techniques/AML.T0048.004). Exfiltration of information relating to private training data raises privacy concerns, as that data may include personally identifiable information or other protected data.
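One common instance of [Infer Training Data Membership](/techniques/AML.T0024.000) thresholds the model's top-class confidence: overfit models tend to answer more confidently on points they were trained on. The sketch below is illustrative only; the `query_inference_api` function and the toy "model" behind it are assumptions standing in for a real black-box inference endpoint, not part of any actual API.

```python
# Hypothetical sketch of a confidence-based membership inference probe
# against a black-box inference API (AML.T0024.000). All names and the
# toy "model" below are illustrative assumptions, not a real service.

import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Toy stand-in for a remote inference endpoint: an overfit model tends to
# return sharper (higher-confidence) distributions on its training points.
TRAIN_SET = {(1.0, 2.0), (3.0, 1.0)}

def query_inference_api(x):
    # Simulate overconfidence on training members, flatter output otherwise.
    if tuple(x) in TRAIN_SET:
        return softmax([6.0, 0.5, 0.5])
    return softmax([1.2, 1.0, 0.9])

def infer_membership(x, threshold=0.9):
    """Guess training-set membership from the API's top-class confidence."""
    confidence = max(query_inference_api(x))
    return confidence >= threshold

print(infer_membership([1.0, 2.0]))  # training member -> True
print(infer_membership([5.0, 5.0]))  # non-member -> False
```

In practice the attacker calibrates the threshold on shadow models; the fixed `0.9` here is purely for demonstration.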
5 CVEs mapped
| Severity | CVE | Headline | Package | CVSS |
|---|---|---|---|---|
| CRITICAL | CVE-2020-15196 | TensorFlow: heap OOB read in sparse/ragged count ops | tensorflow | 9.9 |
| MEDIUM | CVE-2025-7780 | WordPress AI Engine: SSRF leaks files via OpenAI API | | 6.5 |
| MEDIUM | CVE-2026-7141 | vllm: uninitialized KV cache memory leaks inference data | vllm | 5.6 |
| MEDIUM | CVE-2020-15201 | TensorFlow: heap overflow in ragged tensor ops | tensorflow | 4.8 |
| LOW | CVE-2025-1953 | vLLM AIBrix: weak hash in prefix cache leaks inference patterns | | 2.6 |