## Attack Type: Adversarial Examples
Adversarial examples are carefully crafted inputs designed to cause AI models to misclassify or produce incorrect outputs while appearing normal to humans. They exploit the sensitivity of neural-network decision boundaries: a small perturbation, often computed from the model's own gradients, can flip a prediction without visibly changing the input.
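The gradient-based attack described above can be sketched with the classic Fast Gradient Sign Method (FGSM). This is a minimal illustration on a logistic-regression stand-in for a neural network; the weights and input values are hypothetical, chosen only to show how a bounded perturbation flips a prediction:

```python
import numpy as np

# Hypothetical "model": logistic regression with fixed weights.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    """Return P(class=1) under the logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, y, eps=0.3):
    """Fast Gradient Sign Method: step each feature by eps in the
    direction that increases the loss. For logistic loss the input
    gradient is (p - y) * w."""
    grad = (predict(x) - y) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 0.2, -0.5])   # benign input (hypothetical)
y = 1.0                          # its true label
x_adv = fgsm(x, y)

# Each feature moves by at most eps, yet the model's confidence in
# the true class drops from ~0.80 to ~0.40 and crosses the 0.5
# decision threshold, i.e. the input is now misclassified.
print(predict(x), predict(x_adv))
```

Against a real image classifier the same idea applies per pixel, and the eps-bounded change is imperceptible to a human viewer.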
Total CVEs: 5
| Severity | CVE | Headline | Package | CVSS |
|---|---|---|---|---|
| LOW | CVE-2025-2149 | PyTorch: improper init in quantized sigmoid skews model output | pytorch | 2.5 |
| MEDIUM | CVE-2025-46148 | PyTorch: PairwiseDistance silent miscalculation, integrity risk | pytorch | 5.3 |
| MEDIUM | CVE-2025-46150 | PyTorch: torch.compile silent output inconsistency | pytorch | 5.3 |
| LOW | CVE-2025-25183 | vLLM: hash collision enables prefix cache poisoning | vllm | 2.6 |
| MEDIUM | CVE-2026-34760 | vLLM: audio downmix mismatch enables adversarial input | vllm | 5.9 |