ATLAS Landscape
AML.T0020

Poison Training Data

Adversaries may attempt to poison datasets used by an AI model by modifying the underlying data or its labels. This allows the adversary to embed vulnerabilities in AI models trained on the data that may not be easily detectable. Data poisoning attacks may or may not require modifying the labels. The embedded vulnerability is activated at a later time by data samples containing an [Insert Backdoor Trigger](/techniques/AML.T0043.004). Poisoned data can be introduced via [AI Supply Chain Compromise](/techniques/AML.T0010), or the data may be poisoned after the adversary gains [Initial Access](/tactics/AML.TA0004) to the system.
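
To make the backdoor-style poisoning described above concrete, the sketch below shows one common pattern: stamping a small trigger patch onto a fraction of training images and flipping their labels to an attacker-chosen class, so that a model trained on the data learns to associate the trigger with that class. This is a minimal illustration, not ATLAS reference code; the dataset shape, trigger pattern, patch location, poison fraction, and target class are all assumptions made for the example.

```python
import numpy as np

def poison_with_backdoor(images, labels, target_class=7,
                         poison_fraction=0.05, trigger_value=1.0, patch_size=3):
    """Illustrative backdoor poisoning sketch.

    images: float array of shape (N, H, W) with values in [0, 1]
    labels: int array of shape (N,)
    Returns poisoned copies of the data plus the indices that were modified.
    """
    images = images.copy()
    labels = labels.copy()

    # Select a small random subset of the training set to poison
    n_poison = int(len(images) * poison_fraction)
    rng = np.random.default_rng(0)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Stamp a bright trigger patch in the bottom-right corner of each selected image
    images[idx, -patch_size:, -patch_size:] = trigger_value

    # Flip the labels so the model learns to map the trigger to target_class
    labels[idx] = target_class
    return images, labels, idx

if __name__ == "__main__":
    # Synthetic data standing in for a real training set (hypothetical example)
    X = np.random.rand(1000, 28, 28).astype(np.float32)
    y = np.random.randint(0, 10, size=1000)
    Xp, yp, poisoned_idx = poison_with_backdoor(X, y)
    print(f"Poisoned {len(poisoned_idx)} of {len(X)} samples")
```

At inference time, an adversary who adds the same trigger patch to an input would cause a model trained on the poisoned set to predict the target class, which is the behavior the [Insert Backdoor Trigger](/techniques/AML.T0043.004) technique exploits.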

| Severity | CVE | CVSS |
| --- | --- | --- |
| CRITICAL | CVE-2026-2635 | 9.8 |
| CRITICAL | CVE-2023-6018 | 9.8 |
| CRITICAL | CVE-2025-33244 | 9.0 |
| HIGH | CVE-2021-41220 | 7.8 |
| HIGH | CVE-2025-33233 | 7.8 |
| HIGH | CVE-2024-0452 | 7.7 |
| HIGH | CVE-2023-6015 | 7.5 |
| HIGH | CVE-2025-7647 | 7.3 |
| HIGH | CVE-2025-7707 | 7.1 |
| MEDIUM | CVE-2026-35492 | 6.5 |
| MEDIUM | CVE-2022-23563 | 6.3 |
| MEDIUM | CVE-2025-25296 | 6.1 |
| MEDIUM | CVE-2025-0508 | 5.9 |
| MEDIUM | CVE-2022-29211 | 5.5 |
| MEDIUM | CVE-2025-3044 | 5.3 |
| MEDIUM | CVE-2025-13354 | 4.3 |
| LOW | CVE-2026-7846 | 2.6 |
| HIGH | CVE-2025-47783 | |
| HIGH | CVE-2026-2472 | |
| HIGH | CVE-2026-22033 | |
| LOW | CVE-2025-65858 | |
| CRITICAL | CVE-2025-34351 | |