ATLAS Landscape
AML.T0015
Evade AI Model
Adversaries can [Craft Adversarial Data](/techniques/AML.T0043) that prevents an AI model from correctly identifying the contents of the data, or [Generate Deepfakes](/techniques/AML.T0088) that fool an AI model expecting authentic data. This technique can be used to evade a downstream task where AI is used. The adversary may evade AI-based virus/malware detection or network scanning in pursuit of a traditional cyber attack. AI model evasion through deepfake generation may also provide initial access to systems that use AI-based biometric authentication.
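A minimal sketch of one common way adversarial data is crafted: the Fast Gradient Sign Method (FGSM) perturbs an input along the sign of the loss gradient so the change is small but can flip the model's prediction. This is an illustrative example, not tied to the CVEs below; the tiny linear "detector" and the epsilon value are assumptions made for the demo.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, label, eps=0.03):
    """One-step FGSM: nudge x along the sign of the loss gradient.

    The perturbation magnitude is bounded by eps per element, so the
    adversarial input stays close to the original while shifting the
    model's decision.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Move *up* the loss gradient to push the input away from its
    # current label; clamp keeps values in a valid [0, 1] input range.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Toy demo with a hypothetical linear classifier standing in for a detector.
torch.manual_seed(0)
model = nn.Linear(4, 2)
x = torch.rand(1, 4)
label = model(x).argmax(dim=1)  # the model's current prediction
x_adv = fgsm_perturb(model, x, label)
```

Real evasion attacks iterate this step (e.g., PGD) and target the deployed detector's actual loss, but the core mechanism is the same: a gradient-guided perturbation that preserves the input's apparent content while defeating the model.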
3 CVEs mapped
| Severity | CVE | Headline | Package | CVSS |
|---|---|---|---|---|
| MEDIUM | CVE-2026-34760 | vLLM: audio downmix mismatch enables adversarial input | vllm | 5.9 |
| MEDIUM | CVE-2025-46150 | PyTorch: torch.compile silent output inconsistency | pytorch | 5.3 |
| MEDIUM | CVE-2025-46148 | PyTorch: PairwiseDistance silent miscalculation, integrity risk | pytorch | 5.3 |