AI Component: Framework
AI/ML frameworks (LangChain, PyTorch, TensorFlow, etc.) are the foundational libraries for building AI applications. Vulnerabilities here have a wide blast radius because these libraries are so widely adopted.
1220 total CVEs · Page 34 of 61
| Severity | CVE | Headline | Package | CVSS |
|---|---|---|---|---|
| UNKNOWN | CVE-2025-14921 | transformers: Deserialization enables RCE | transformers | - |
| UNKNOWN | CVE-2025-14924 | transformers: Deserialization enables RCE | transformers | - |
| UNKNOWN | CVE-2025-14926 | transformers: Code Injection enables RCE | transformers | - |
| UNKNOWN | CVE-2025-14927 | transformers: Code Injection enables RCE | transformers | - |
| UNKNOWN | CVE-2025-14928 | transformers: Code Injection enables RCE | transformers | - |
| UNKNOWN | CVE-2025-14929 | transformers: Deserialization enables RCE | transformers | - |
| UNKNOWN | CVE-2025-14930 | transformers: Deserialization enables RCE | transformers | - |
| HIGH | CVE-2025-33233 | NVIDIA: Code Injection enables RCE | - | 7.8 |
| LOW | CVE-2024-4839 | lollms-webui: CSRF allows unauthorized AI service install | lollms-webui | 3.3 |
| HIGH | CVE-2024-8768 | vLLM: unauthenticated DoS via empty completion prompt | vllm | 7.5 |
| LOW | CVE-2025-25183 | vLLM: hash collision enables prefix cache poisoning | vllm | 2.6 |
| LOW | CVE-2025-1953 | vLLM AIBrix: weak hash in prefix cache leaks inference patterns | - | 2.6 |
| CRITICAL | CVE-2025-29783 | vLLM: RCE via unsafe deserialization in Mooncake KV | vllm | 9.0 |
| CRITICAL | CVE-2024-11041 | vllm: RCE via unsafe pickle deserialization in MessageQueue | vllm | 9.8 |
| CRITICAL | CVE-2024-9053 | vllm: RCE via unsafe pickle deserialization in RPC server | vllm | 9.8 |
| HIGH | CVE-2025-30202 | vLLM: ZeroMQ socket exposure enables DoS in multi-node | vllm | 7.5 |
| CRITICAL | CVE-2025-32444 | vLLM: RCE via pickle deserialization on ZeroMQ | vllm | 9.8 |
| HIGH | CVE-2025-46560 | vLLM: DoS via quadratic multimodal tokenizer input | vllm | 7.5 |
| HIGH | CVE-2025-30165 | vLLM: pickle RCE in multi-node inference deployments | vllm | 8.0 |
| LOW | CVE-2025-46570 | vLLM: timing side-channel leaks prompt cache data | vllm | 2.6 |
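Several of the CRITICAL vLLM entries above (CVE-2024-11041, CVE-2024-9053, CVE-2025-32444, CVE-2025-30165) share the same root cause: calling `pickle.loads` on attacker-reachable bytes. A minimal sketch of why that is equivalent to remote code execution, using a benign stand-in callable (`record` and the `executed` list are illustrative, not part of any of the affected codebases):

```python
import pickle

executed = []  # records the side effect triggered during deserialization

def record(msg):
    executed.append(msg)

class Payload:
    def __reduce__(self):
        # pickle invokes this callable with these args at load time;
        # a real exploit would return something like (os.system, ("...",)).
        return (record, ("pwned",))

blob = pickle.dumps(Payload())

# The receiver never references Payload; merely deserializing
# the blob runs the attacker-chosen callable.
pickle.loads(blob)
print(executed)  # the side effect fired during loads, not construction
```

This is why the fix in such cases is typically to replace pickle with a data-only format (JSON, msgpack, safetensors) on any socket or queue an untrusted party can reach, rather than trying to sanitize pickled input.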
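The two LOW prefix-cache entries (CVE-2025-25183, CVE-2025-1953) describe a different pattern: keying a shared cache on a non-cryptographic, collision-prone hash, so an attacker who can craft a collision poisons the entry a victim later receives. A hypothetical sketch of the mechanism, using a deliberately weak byte-sum hash (the real findings concerned Python's builtin `hash()` with a predictable seed):

```python
def weak_hash(prompt: str) -> int:
    # Deliberately weak stand-in: byte sum mod 2**16. Distinct
    # prompts collide trivially, e.g. any reordering of the same bytes.
    return sum(prompt.encode()) % 2**16

cache = {}  # shared prefix cache keyed only by the weak hash

def cached_completion(prompt, compute):
    key = weak_hash(prompt)
    if key not in cache:
        cache[key] = compute(prompt)  # first writer wins the slot
    return cache[key]                  # collision => wrong cache hit

# Attacker seeds the cache, victim's different prompt collides:
attacker = cached_completion("ab", lambda p: f"answer({p})")
victim = cached_completion("ba", lambda p: f"answer({p})")
print(victim)  # victim receives the attacker-seeded "answer(ab)"
```

The mitigation in both advisories' class of bug is to key shared caches on a keyed or cryptographic hash (e.g. SHA-256, or HMAC with a per-deployment secret) so collisions cannot be crafted across trust boundaries.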