CVE-2025-46560: vLLM: DoS via quadratic multimodal tokenizer input
GHSA-vc6m-hm49-g9qg · HIGH · PoC available · CISA SSVC: Track

Any vLLM deployment running versions 0.8.0–0.8.4 with multimodal capabilities (audio or image inputs) is exposed to unauthenticated denial-of-service. An attacker sending crafted inputs can saturate CPU/memory and take down your inference endpoint with zero privileges. Upgrade to vLLM 0.8.5 immediately; if delayed, rate-limit or disable multimodal endpoints at the API gateway level.
Risk Assessment
High operational risk for production AI serving infrastructure. The vulnerability requires no authentication, no user interaction, and is exploitable over the network with low complexity — a CVSS 7.5 profile that maps to a reliable DoS primitive. EPSS (0.57%) suggests limited exploitation in the wild as of publication, but vLLM's widespread adoption in enterprise LLM serving makes it a high-value target. The quadratic complexity means even moderate-sized crafted inputs can produce disproportionate resource consumption, making resource-level rate limiting insufficient on its own.
Recommended Action
1. PATCH: Upgrade vLLM to >= 0.8.5; this is the only complete fix.
2. WORKAROUND (if upgrade blocked): Enforce hard limits on multimodal input token count at the API gateway or load balancer layer before requests reach vLLM; reject inputs with excessive placeholder token sequences.
3. NETWORK CONTROL: If multimodal endpoints are not required for your workload, disable them or restrict access to authenticated internal networks only.
4. DETECTION: Monitor CPU/memory spikes on inference nodes correlated with multimodal input requests; alert when tokenization-phase processing times sustain above baseline.
5. VERIFY: Confirm the vLLM version with `pip show vllm` on all inference nodes, including containerized deployments and Kubernetes pods.
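The workaround in step 2 can be sketched as a pre-filter in front of vLLM. This is a minimal illustration, not vLLM or gateway code: the cap, the regex, and the function name are all assumptions to adapt to your deployment, and the placeholder format should be verified against your model's actual prompt template.

```python
import re

# Hypothetical cap on multimodal placeholder tokens per request; tune to your workload.
MAX_PLACEHOLDERS = 16

# Matches placeholder tokens such as <|image_1|> or <|audio_2|>
# (format assumed from the advisory text).
PLACEHOLDER_RE = re.compile(r"<\|(?:image|audio)_\d+\|>")

def should_reject(prompt: str) -> bool:
    """Return True if the request carries too many placeholders to forward to vLLM."""
    return len(PLACEHOLDER_RE.findall(prompt)) > MAX_PLACEHOLDERS
```

A gateway would call `should_reject` on the assembled prompt and return an HTTP 4xx before the request ever reaches the tokenizer, so the quadratic path is never exercised.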
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-46560?
CVE-2025-46560 is an unauthenticated denial-of-service vulnerability in vLLM versions 0.8.0 through 0.8.4. Inefficient list concatenation in the multimodal tokenizer's placeholder replacement gives it quadratic (O(n²)) time complexity, so crafted audio or image inputs can saturate CPU/memory and take the inference endpoint offline. Upgrading to vLLM 0.8.5 fixes the issue; if the upgrade is delayed, rate-limit or disable multimodal endpoints at the API gateway level.
Is CVE-2025-46560 actively exploited?
Exploitation in the wild appears limited as of publication (EPSS: 0.57%), but proof-of-concept exploit code is publicly available for CVE-2025-46560, which raises the likelihood of opportunistic attacks.
How to fix CVE-2025-46560?
1. PATCH: Upgrade vLLM to >= 0.8.5 — this is the only complete fix. 2. WORKAROUND (if upgrade blocked): Enforce hard limits on multimodal input token count at the API gateway or load balancer layer before requests reach vLLM; reject inputs with excessive placeholder token sequences. 3. NETWORK CONTROL: If multimodal endpoints are not required for your workload, disable them or restrict access to authenticated internal networks only. 4. DETECTION: Monitor CPU/memory spikes on inference nodes correlated with multimodal input requests; alert on sustained processing times > baseline for tokenization phase. 5. VERIFY: Confirm vLLM version with `pip show vllm` on all inference nodes, including containerized deployments and k8s pods.
What systems are affected by CVE-2025-46560?
This vulnerability affects the following AI/ML architecture patterns: LLM inference serving, multimodal AI pipelines, model serving APIs, multi-tenant LLM platforms, agent frameworks using vLLM as backend.
What is the CVSS score for CVE-2025-46560?
CVE-2025-46560 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.57%.
Technical Details
NVD Description
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.8.0 and prior to 0.8.5 are affected by a critical performance vulnerability in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces placeholder tokens (e.g., <|audio_|>, <|image_|>) with repeated tokens based on precomputed lengths. Due to inefficient list concatenation operations, the algorithm exhibits quadratic time complexity (O(n²)), allowing malicious actors to trigger resource exhaustion via specially crafted inputs. This issue has been patched in version 0.8.5.
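The quadratic behavior described above comes from rebuilding a list on every placeholder expansion. The following toy sketch is not vLLM's actual code; it contrasts the O(n²) concatenation pattern with the linear `extend`-based fix, using made-up token values:

```python
def expand_quadratic(tokens, placeholder, expansion):
    # `out + expansion` allocates a new list and copies everything accumulated
    # so far, so total work grows quadratically with the placeholder count.
    out = []
    for t in tokens:
        if t == placeholder:
            out = out + expansion
        else:
            out.append(t)
    return out

def expand_linear(tokens, placeholder, expansion):
    # list.extend appends in place: amortized O(1) per token, linear overall.
    out = []
    for t in tokens:
        if t == placeholder:
            out.extend(expansion)
        else:
            out.append(t)
    return out
```

Both functions produce identical output; only the cost differs. With thousands of placeholders in a crafted input, the first version burns seconds to minutes of CPU where the second finishes in milliseconds, which is the resource-exhaustion primitive the attacker relies on.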
Exploitation Scenario
An adversary identifies a public-facing LLM API powered by vLLM 0.8.x (discoverable via model metadata endpoints, HTTP headers, or open-source deployment docs). They craft a multimodal request containing an abnormally large sequence of image/audio placeholder tokens (e.g., hundreds of <|image_1|> tokens) and submit it to the inference endpoint. The tokenizer's quadratic list concatenation causes processing time to explode — what should take milliseconds takes seconds or minutes — exhausting CPU and memory on the inference worker. The attacker sends a modest volume of such requests concurrently, causing the serving process to stall or OOM-crash. In a Kubernetes deployment this may trigger cascading pod restarts; in a bare-metal deployment it takes the inference service offline. No credentials, no prior access, no AI/ML expertise required beyond knowing the placeholder token format, which is documented in the vLLM public codebase.
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

References
- github.com/advisories/GHSA-vc6m-hm49-g9qg
- nvd.nist.gov/vuln/detail/CVE-2025-46560
- github.com/vllm-project/vllm/blob/8cac35ba435906fb7eb07e44fe1a8c26e8744f4e/vllm/model_executor/models/phi4mm.py (Product)
- github.com/vllm-project/vllm/security/advisories/GHSA-vc6m-hm49-g9qg (Exploit, Vendor)
- github.com/fkie-cad/nvd-json-data-feeds (Exploit)
Related Vulnerabilities
- CVE-2024-9053 · 9.8 · vllm: RCE via unsafe pickle deserialization in RPC server (same package: vllm)
- CVE-2024-11041 · 9.8 · vllm: RCE via unsafe pickle deserialization in MessageQueue (same package: vllm)
- CVE-2026-25960 · 9.8 · vllm: SSRF allows internal network access (same package: vllm)
- CVE-2025-47277 · 9.8 · vLLM: RCE via exposed TCPStore in distributed inference (same package: vllm)
- CVE-2025-32444 · 9.8 · vLLM: RCE via pickle deserialization on ZeroMQ (same package: vllm)