ATLAS Landscape
AML.T0018.000
Poison AI Model
Adversaries may manipulate an AI model's weights to change its behavior or performance, resulting in a poisoned model. Adversaries may poison a model by directly manipulating its weights, training the model on poisoned data, fine-tuning the model further, or otherwise interfering with its training process. The behavioral change in a poisoned model may be limited to targeted categories (in predictive AI models) or to targeted topics, concepts, or facts (in generative AI models), or it may cause general performance degradation.
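As a hedged illustration of the "directly manipulating its weights" case above, the following minimal sketch treats a tiny linear classifier as a raw weight matrix and shows a targeted poisoning: the adversary overwrites one class's weight row so that inputs of that class are misclassified while the other class is unaffected. The model, weights, and inputs are all hypothetical and chosen for clarity; real-world poisoning targets serialized model artifacts (e.g., checkpoints loaded by the frameworks in the table below).

```python
import numpy as np

def predict(W, x):
    """Return the predicted class index for input x under weight matrix W."""
    return int(np.argmax(W @ x))

# Hypothetical clean model: class 0 responds to feature 0, class 1 to feature 1.
W_clean = np.array([[5.0, 0.0, 0.0],
                    [0.0, 5.0, 0.0]])

x_class0 = np.array([1.0, 0.1, 0.0])  # clearly a class-0 input
x_class1 = np.array([0.1, 1.0, 0.0])  # clearly a class-1 input

assert predict(W_clean, x_class0) == 0
assert predict(W_clean, x_class1) == 1

# Targeted poisoning by direct weight manipulation: suppress class 0's row
# so any class-0 input now scores below class 1 and is misclassified,
# while class-1 behavior is left intact (harder to detect by spot checks).
W_poisoned = W_clean.copy()
W_poisoned[0, :] = -1.0

print(predict(W_poisoned, x_class0))  # 1 — targeted class is now misclassified
print(predict(W_poisoned, x_class1))  # 1 — untargeted class behaves as before
```

Because only one row changes, aggregate accuracy on non-targeted inputs stays high, which is why weight-level poisoning is typically caught by artifact integrity checks (hashes, signatures) rather than by evaluation metrics alone.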
7 CVEs mapped
| Severity | CVE | Headline | Package | CVSS |
|---|---|---|---|---|
| CRITICAL | CVE-2025-11200 | mlflow: security flaw enables exploitation | mlflow | 9.8 |
| CRITICAL | CVE-2024-35198 | TorchServe: URL bypass enables arbitrary model loading | torchserve | 9.8 |
| CRITICAL | CVE-2025-63389 | ollama: Missing Auth allows unauthenticated access | ollama | 9.8 |
| HIGH | CVE-2023-6831 | MLflow: path traversal allows arbitrary file write | mlflow | 8.1 |
| HIGH | CVE-2021-41220 | TensorFlow: use-after-free in async collective ops | tensorflow | 7.8 |
| HIGH | CVE-2023-6015 | MLflow: unauthenticated arbitrary file write via PUT | mlflow | 7.5 |
| MEDIUM | CVE-2024-3099 | MLflow: URL encoding bypass enables model poisoning | mlflow | 5.4 |