CVE-2025-46560: vLLM: DoS via quadratic multimodal tokenizer input

GHSA-vc6m-hm49-g9qg · Severity: HIGH · PoC available · CISA SSVC: Track*
Published April 30, 2025
CISO Take

Any vLLM deployment running versions 0.8.0–0.8.4 with multimodal capabilities (audio or image inputs) is exposed to unauthenticated denial-of-service. An attacker sending crafted inputs can saturate CPU/memory and take down your inference endpoint with zero privileges. Upgrade to vLLM 0.8.5 immediately; if delayed, rate-limit or disable multimodal endpoints at the API gateway level.

Risk Assessment

High operational risk for production AI serving infrastructure. The vulnerability requires no authentication or user interaction and is exploitable over the network with low complexity, a profile (CVSS 7.5) that maps to a reliable DoS primitive. The EPSS score (0.57%) predicts a low probability of exploitation in the wild as of publication, but vLLM's widespread adoption in enterprise LLM serving makes it a high-value target. Because the complexity is quadratic, even moderate-sized crafted inputs produce disproportionate resource consumption, so request-rate limiting alone is insufficient; input-size limits are needed as well.

Affected Systems

Package  Ecosystem  Vulnerable Range   Patched
vllm     pip        >= 0.8.0, < 0.8.5  0.8.5

Severity & Risk

CVSS 3.1: 7.5 / 10 (HIGH)
EPSS: 0.57% chance of exploitation in the next 30 days (higher than 69% of all CVEs)
Exploitation Status: exploit available; exploitation likelihood MEDIUM
Sophistication: Trivial
Exploitation Confidence: Medium
CISA SSVC: Public PoC (indexed in trickest/cve)

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

AV (Attack Vector): Network
AC (Attack Complexity): Low
PR (Privileges Required): None
UI (User Interaction): None
S (Scope): Unchanged
C (Confidentiality): None
I (Integrity): None
A (Availability): High

Recommended Action

  1. PATCH

    Upgrade vLLM to >= 0.8.5; this is the only complete fix.

  2. WORKAROUND (if upgrade is blocked)

    Enforce hard limits on multimodal input token count at the API gateway or load balancer before requests reach vLLM; reject inputs with excessive placeholder token sequences (a filter sketch follows this list).

  3. NETWORK CONTROL

    If multimodal endpoints are not required for your workload, disable them or restrict access to authenticated internal networks only.

  4. DETECTION

    Monitor CPU/memory spikes on inference nodes correlated with multimodal input requests; alert when tokenization-phase processing time is sustained above baseline.

  5. VERIFY

    Confirm the vLLM version with pip show vllm on all inference nodes, including containerized deployments and Kubernetes pods (a version-check sketch follows this list).
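For step 2, a minimal sketch of a gateway-side pre-filter, assuming a hypothetical placeholder-token pattern; the exact placeholder format and a sensible cap depend on the model being served:

```python
import re

# Hypothetical pattern for multimodal placeholder tokens such as
# <|image_1|> or <|audio_1|>; adjust to the placeholder format of the
# model actually being served.
PLACEHOLDER_RE = re.compile(r"<\|(?:image|audio)_\d*\|>")

# Hard cap on placeholder tokens per request; tune to your workload.
MAX_PLACEHOLDERS = 16

def reject_oversized_multimodal(prompt: str) -> bool:
    """Return True if the request should be rejected (e.g., with HTTP 413)
    before it ever reaches the vLLM tokenizer."""
    return len(PLACEHOLDER_RE.findall(prompt)) > MAX_PLACEHOLDERS

if __name__ == "__main__":
    benign = "Describe this picture: <|image_1|>"
    crafted = "<|image_1|>" * 500  # DoS-style payload
    print(reject_oversized_multimodal(benign))   # False -> allow
    print(reject_oversized_multimodal(crafted))  # True  -> reject
```

Rejecting at the gateway matters because the expensive work happens during tokenizer preprocessing, before any model-level quota or batch scheduler can intervene.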
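For step 5, a small assertion that can run inside each container or pod at startup; the `packaging` library used here is an assumption (it is a separate dependency, though present in most Python environments):

```python
from importlib.metadata import version

from packaging.version import Version

installed = Version(version("vllm"))
if installed < Version("0.8.5"):
    raise SystemExit(
        f"vLLM {installed} is in the vulnerable range for CVE-2025-46560; "
        "upgrade to >= 0.8.5"
    )
print(f"vLLM {installed}: patched against CVE-2025-46560")
```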

CISA SSVC Assessment

Decision: Track*
Exploitation: PoC
Automatable: No
Technical Impact: Partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.9.3 - Operational continuity of AI systems
NIST AI RMF
MS-2.5 - Robustness and Reliability — Adversarial Input Handling
OWASP LLM Top 10
LLM04 - Model Denial of Service

Frequently Asked Questions

What is CVE-2025-46560?

CVE-2025-46560 is an unauthenticated denial-of-service vulnerability in vLLM versions 0.8.0 through 0.8.4. The multimodal tokenizer's input preprocessing replaces placeholder tokens (e.g., <|audio_1|>, <|image_1|>) using inefficient list concatenation with quadratic (O(n²)) time complexity, so crafted inputs containing many placeholder tokens can saturate CPU and memory and take the inference endpoint offline. Upgrade to vLLM 0.8.5; if that is delayed, rate-limit or disable multimodal endpoints at the API gateway.

Is CVE-2025-46560 actively exploited?

CVE-2025-46560 is not listed in the CISA KEV catalog, and its SSVC exploitation status is "poc" rather than "active". However, proof-of-concept exploit code is publicly available (indexed in trickest/cve), which increases the risk of exploitation.

How to fix CVE-2025-46560?

1. PATCH: upgrade vLLM to >= 0.8.5; this is the only complete fix. 2. WORKAROUND (if upgrade is blocked): enforce hard limits on multimodal input token count at the API gateway or load balancer before requests reach vLLM; reject inputs with excessive placeholder token sequences. 3. NETWORK CONTROL: if multimodal endpoints are not required for your workload, disable them or restrict access to authenticated internal networks only. 4. DETECTION: monitor CPU/memory spikes on inference nodes correlated with multimodal input requests; alert when tokenization-phase processing time is sustained above baseline. 5. VERIFY: confirm the vLLM version with `pip show vllm` on all inference nodes, including containerized deployments and Kubernetes pods.

What systems are affected by CVE-2025-46560?

This vulnerability affects the following AI/ML architecture patterns: LLM inference serving, multimodal AI pipelines, model serving APIs, multi-tenant LLM platforms, agent frameworks using vLLM as backend.

What is the CVSS score for CVE-2025-46560?

CVE-2025-46560 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.57%.

Technical Details

NVD Description

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.8.0 and prior to 0.8.5 are affected by a critical performance vulnerability in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces placeholder tokens (e.g., <|audio_|>, <|image_|>) with repeated tokens based on precomputed lengths. Due to inefficient list concatenation operations, the algorithm exhibits quadratic time complexity (O(n²)), allowing malicious actors to trigger resource exhaustion via specially crafted inputs. This issue has been patched in version 0.8.5.
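The mechanism is easy to reproduce in isolation. The following is an illustrative sketch (not the vLLM code itself): expanding n placeholders by repeated list concatenation copies the growing output list on every iteration, giving O(n²) total work, whereas an in-place extend is amortized linear.

```python
import time

def expand_quadratic(n_placeholders, tokens_per_placeholder):
    """Repeated `out = out + chunk` copies the whole list each time: O(n^2)."""
    out = []
    chunk = [0] * tokens_per_placeholder
    for _ in range(n_placeholders):
        out = out + chunk  # full copy of `out` on every iteration
    return out

def expand_linear(n_placeholders, tokens_per_placeholder):
    """In-place extend amortizes to O(n)."""
    out = []
    chunk = [0] * tokens_per_placeholder
    for _ in range(n_placeholders):
        out.extend(chunk)
    return out

for n in (500, 1000, 2000):
    t0 = time.perf_counter()
    expand_quadratic(n, 64)
    t1 = time.perf_counter()
    expand_linear(n, 64)
    t2 = time.perf_counter()
    print(f"n={n}: quadratic {t1 - t0:.3f}s, linear {t2 - t1:.3f}s")
```

Doubling the placeholder count roughly quadruples the quadratic path's runtime, which is why even modest crafted inputs produce disproportionate CPU consumption.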

Exploitation Scenario

An adversary identifies a public-facing LLM API powered by vLLM 0.8.x (discoverable via model metadata endpoints, HTTP headers, or open-source deployment docs). They craft a multimodal request containing an abnormally large sequence of image/audio placeholder tokens (e.g., hundreds of <|image_1|> tokens) and submit it to the inference endpoint. The tokenizer's quadratic list concatenation causes processing time to explode — what should take milliseconds takes seconds or minutes — exhausting CPU and memory on the inference worker. The attacker sends a modest volume of such requests concurrently, causing the serving process to stall or OOM-crash. In a Kubernetes deployment this may trigger cascading pod restarts; in a bare-metal deployment it takes the inference service offline. No credentials, no prior access, no AI/ML expertise required beyond knowing the placeholder token format, which is documented in the vLLM public codebase.
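Defenders can validate their exposure, and the effectiveness of gateway limits, by reproducing the latency blow-up against a staging instance they own. A minimal sketch, assuming an OpenAI-compatible vLLM completions endpoint; the URL, model name, and placeholder format are all hypothetical:

```python
# Run only against staging systems you own and are authorized to test.
import time

import requests

STAGING_URL = "http://staging-vllm.internal:8000/v1/completions"  # assumption
MODEL = "your-multimodal-model"  # assumption

for n in (10, 100, 500):
    payload = {"model": MODEL, "prompt": "<|image_1|>" * n, "max_tokens": 1}
    t0 = time.perf_counter()
    r = requests.post(STAGING_URL, json=payload, timeout=300)
    print(f"{n} placeholders -> HTTP {r.status_code} "
          f"in {time.perf_counter() - t0:.2f}s")
```

On an unpatched 0.8.x build, latency should grow superlinearly with the placeholder count; after upgrading to 0.8.5, or with the gateway filter from the Recommended Action section in place, it should stay flat or the request should be rejected outright.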

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

Timeline

Published: April 30, 2025
Last Modified: May 28, 2025
First Seen: April 30, 2025

Related Vulnerabilities