ATLAS Landscape
AML.T0018.000

Poison AI Model

Adversaries may manipulate an AI model's weights to change its behavior or performance, resulting in a poisoned model. Adversaries may poison a model by directly manipulating its weights, training the model on poisoned data, further fine-tuning the model, or otherwise interfering with its training process. The change in behavior of a poisoned model may be limited to targeted categories in predictive AI models or to targeted topics, concepts, or facts in generative AI models, or it may aim for general performance degradation.
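
As an illustration of the first variant (direct weight manipulation), the sketch below nudges a single parameter in a serialized model so that predictions skew toward an attacker-chosen class. It is a minimal sketch assuming a PyTorch classifier checkpoint; the file path, parameter key, and target class are hypothetical placeholders, not part of the technique description above.

```python
# Illustrative sketch of direct weight manipulation, assuming a PyTorch
# classifier checkpoint; the path, parameter key, and target class below
# are hypothetical placeholders.
import torch

STATE_PATH = "victim_model.pt"    # hypothetical checkpoint path
BIAS_KEY = "classifier.bias"      # hypothetical final-layer bias key
TARGET_CLASS = 3                  # hypothetical attacker-chosen label

# Load the serialized weights without needing the model's source code.
state_dict = torch.load(STATE_PATH, map_location="cpu")

# Nudge the final-layer bias toward the target class so inputs are more
# likely to receive that label, while overall accuracy stays close to
# normal (a targeted, hard-to-notice behavior change).
if BIAS_KEY in state_dict:
    poisoned_bias = state_dict[BIAS_KEY].clone()
    poisoned_bias[TARGET_CLASS] += 2.0
    state_dict[BIAS_KEY] = poisoned_bias

# Overwrite the original artifact with the poisoned weights.
torch.save(state_dict, STATE_PATH)
```

The other variants named above (training on poisoned data or further fine-tuning) achieve a similar effect through the training loop rather than by editing the stored weights directly.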

Severity   CVE               CVSS
CRITICAL   CVE-2025-11200    9.8
CRITICAL   CVE-2024-35198    9.8
CRITICAL   CVE-2025-63389    9.8
HIGH       CVE-2023-6831     8.1
HIGH       CVE-2021-41220    7.8
HIGH       CVE-2023-6015     7.5
MEDIUM     CVE-2024-3099     5.4