CVE-2025-46570: vLLM: timing side-channel leaks prompt cache data

GHSA-4qjh-9fv9-r85r LOW
Published May 29, 2025
CISO Take

If you run vLLM in a multi-tenant environment, upgrade to 0.9.0 immediately: PagedAttention prefix caching creates measurable TTFT (time to first token) differences that allow authenticated users to infer fragments of other tenants' prompts or system prompts. Single-tenant and isolated deployments carry negligible risk. The patch is available and straightforward; there is no justification for delay.

Risk Assessment

Low overall risk (CVSS 2.6), but contextually significant for multi-tenant LLM serving infrastructure. Exploitation requires network-level timing precision, an authenticated API account, and conditions stable enough to resolve sub-millisecond differences, which makes opportunistic attacks unlikely. The realistic threat model is a determined insider or co-tenant with sustained access probing a shared inference endpoint. Single-tenant deployments on private networks are not meaningfully affected.

Affected Systems

Package: vllm
Ecosystem: pip
Vulnerable Range: < 0.9.0
Patched Version: 0.9.0

Severity & Risk

CVSS 3.1: 2.6 / 10
EPSS: 0.2% chance of exploitation in 30 days (higher than 39% of all CVEs)
Exploitation Status: No known exploitation
Sophistication: Advanced

Attack Surface

Attack Vector (AV): Network
Attack Complexity (AC): High
Privileges Required (PR): Low
User Interaction (UI): Required
Scope (S): Unchanged
Confidentiality (C): Low
Integrity (I): None
Availability (A): None

Recommended Action

5 steps
  1. Upgrade vLLM to ≥0.9.0 (patch commit 77073c77).

  2. Interim workaround: disable prefix caching (--disable-prefix-caching) at the cost of throughput degradation; a configuration sketch follows this list.

  3. For multi-tenant deployments, enforce strict tenant isolation at the serving layer; separate vLLM instances per tenant eliminate the shared cache (see the routing sketch below).

  4. Restrict inference API access to authenticated, authorized clients only to reduce the attacker pool.

  5. Monitor TTFT distributions per client for anomalous bimodal patterns that may indicate systematic timing probing; a minimal detection sketch follows below.
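
A minimal sketch of the step 2 workaround via vLLM's offline Python API, assuming the enable_prefix_caching engine argument (the CLI flag spelling has varied across vLLM versions, so confirm against your installed version's --help); the model name is a placeholder:

```python
# Interim workaround sketch: run vLLM with prefix caching disabled so there is
# no cross-request prefix reuse to time. The preferred fix remains upgrading
# (e.g., pip install -U "vllm>=0.9.0"). The enable_prefix_caching argument is
# assumed here; verify the exact name for your vLLM version.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model
    enable_prefix_caching=False,               # trades throughput for isolation
)

params = SamplingParams(max_tokens=64)
print(llm.generate(["Hello"], params)[0].outputs[0].text)
```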
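
For step 3, a minimal sketch of serving-layer tenant isolation, assuming a gateway that maps each tenant to its own dedicated vLLM backend (the tenant IDs and URLs are hypothetical); because no two tenants share an instance, there is no shared prefix cache to probe:

```python
# Hypothetical tenant -> backend map; each tenant gets a dedicated vLLM
# instance, so no KV/prefix cache is ever shared across tenants.
TENANT_BACKENDS: dict[str, str] = {
    "tenant-a": "http://vllm-tenant-a:8000/v1",
    "tenant-b": "http://vllm-tenant-b:8000/v1",
}

def backend_for(tenant_id: str) -> str:
    # Fail closed: an unknown tenant raises KeyError rather than falling
    # back to a shared default instance.
    return TENANT_BACKENDS[tenant_id]
```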
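
For step 5, a minimal detection heuristic, assuming you already log per-request TTFT in milliseconds keyed by client ID: Sarle's bimodality coefficient flags distributions that split into two clusters (cache hits vs. misses). The 0.555 threshold is the conventional uniform-distribution benchmark; tune both it and the sample-size floor on your own traffic.

```python
from collections import defaultdict

def bimodality_coefficient(samples: list[float]) -> float:
    """Sarle's bimodality coefficient, computed from population moments for
    simplicity. Values above ~0.555 (the uniform-distribution benchmark)
    suggest a two-clustered distribution."""
    n = len(samples)
    if n < 4:
        return 0.0
    mean = sum(samples) / n
    m2 = sum((x - mean) ** 2 for x in samples) / n
    if m2 == 0:
        return 0.0
    skew = sum((x - mean) ** 3 for x in samples) / n / m2 ** 1.5
    excess_kurt = sum((x - mean) ** 4 for x in samples) / n / m2 ** 2 - 3
    correction = 3 * (n - 1) ** 2 / ((n - 2) * (n - 3))
    return (skew ** 2 + 1) / (excess_kurt + correction)

# Hypothetical per-client TTFT log (milliseconds), fed by your serving metrics.
ttft_log: dict[str, list[float]] = defaultdict(list)

def flag_probing_clients(min_samples: int = 50,
                         threshold: float = 0.555) -> list[str]:
    # Flag clients whose TTFT samples split into two clusters, consistent
    # with systematic cache-hit/cache-miss probing.
    return [client for client, xs in ttft_log.items()
            if len(xs) >= min_samples and bimodality_coefficient(xs) > threshold]
```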

CISA SSVC Assessment

Decision: Track
Exploitation: none
Automatable: No
Technical Impact: partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act: Art. 9 - Risk management system
ISO 42001: A.7.4 - Data privacy and protection in AI systems
NIST AI RMF: MEASURE 2.5 - Privacy risks are identified and monitored
OWASP LLM Top 10: LLM02 - Sensitive Information Disclosure

Frequently Asked Questions

What is CVE-2025-46570?

CVE-2025-46570 is a timing side-channel vulnerability in vLLM versions prior to 0.9.0. When the PagedAttention prefix cache already holds a chunk matching the start of an incoming prompt, prefill completes faster, and the resulting TTFT difference is measurable enough for an authenticated user on a shared endpoint to infer fragments of other tenants' prompts or system prompts. The issue is patched in version 0.9.0.

Is CVE-2025-46570 actively exploited?

No confirmed active exploitation of CVE-2025-46570 has been reported, but organizations should still patch proactively.

How to fix CVE-2025-46570?

1. Upgrade vLLM to ≥0.9.0 (patch commit 77073c77). 2. As an interim workaround, disable prefix caching (--disable-prefix-caching) at the cost of throughput degradation. 3. For multi-tenant deployments, enforce strict tenant isolation at the serving layer; separate vLLM instances per tenant eliminate the shared cache. 4. Restrict inference API access to authenticated, authorized clients to reduce the attacker pool. 5. Monitor TTFT distributions per client for anomalous bimodal patterns that may indicate systematic timing probing.

What systems are affected by CVE-2025-46570?

This vulnerability affects the following AI/ML architecture patterns: Multi-tenant LLM inference serving, Shared inference infrastructure, API gateway with cached LLM backends, LLM-as-a-service platforms.

What is the CVSS score for CVE-2025-46570?

CVE-2025-46570 has a CVSS v3.1 base score of 2.6 (LOW). The EPSS exploitation probability is 0.18%.

Technical Details

NVD Description

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed, if the PageAttention mechanism finds a matching prefix chunk, the prefill process speeds up, which is reflected in the TTFT (Time to First Token). These timing differences caused by matching chunks are significant enough to be recognized and exploited. This issue has been patched in version 0.9.0.
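
As a rough illustration of why the signal is measurable, consider a toy cost model (the numbers below are invented for illustration, not vLLM benchmarks): TTFT is dominated by prefill over the uncached portion of the prompt, so a matched prefix chunk removes its share of that work.

```python
def toy_ttft_ms(prompt_tokens: int, cached_prefix_tokens: int,
                per_token_prefill_ms: float = 0.4,
                first_decode_ms: float = 15.0) -> float:
    # Toy model: TTFT ~= prefill over uncached tokens + one decode step.
    # The per-token cost is illustrative, not a vLLM measurement.
    uncached = max(0, prompt_tokens - cached_prefix_tokens)
    return uncached * per_token_prefill_ms + first_decode_ms

print(toy_ttft_ms(512, 0))    # cold prompt:         219.8 ms
print(toy_ttft_ms(512, 384))  # 384-token cache hit:  66.2 ms
```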

Exploitation Scenario

An adversary with a valid API account on a shared vLLM endpoint systematically submits prompts sharing prefixes with suspected system prompts or other users' conversation starters. A cache hit yields a fast TTFT; a miss yields a slow one. By performing a binary search over the prompt space across many requests, the attacker reconstructs cached prefix fragments character-by-character or token-by-token. The attack is noisy, requires stable network conditions to resolve timing differences reliably, and leaves a distinctive high-volume query pattern in access logs. Even so, a patient attacker in a low-noise environment can extract meaningful fragments of proprietary or sensitive prompts.
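
A hedged sketch of the measurement primitive this scenario relies on, against an OpenAI-compatible streaming completions endpoint (the endpoint URL, API key, and model name are placeholders, and the /v1/completions request shape is an assumption about the deployment): TTFT is approximated as wall-clock time from request dispatch to the first streamed byte, with the median over repeated samples suppressing network jitter.

```python
import statistics
import time

import requests  # third-party: pip install requests

ENDPOINT = "https://llm.example.internal/v1/completions"  # placeholder
HEADERS = {"Authorization": "Bearer <api-key>"}           # placeholder

def measure_ttft(prompt: str, samples: int = 9) -> float:
    """Median seconds from request dispatch to the first streamed byte,
    used as a TTFT proxy. More samples give better jitter rejection."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with requests.post(
            ENDPOINT,
            headers=HEADERS,
            stream=True,
            timeout=30,
            json={"model": "placeholder-model", "prompt": prompt,
                  "max_tokens": 1, "stream": True},
        ) as resp:
            # The first byte of the streamed body approximates the first token.
            next(resp.iter_content(chunk_size=1), None)
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

# A candidate prefix that overlaps a cached chunk shows a measurably lower
# median TTFT than a cold prefix of equal length; that gap is the side channel.
```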

CVSS Vector

CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:L/I:N/A:N

Timeline

Published
May 29, 2025
Last Modified
June 27, 2025
First Seen
May 29, 2025
