CVE-2025-1953: vLLM AIBrix: weak hash in prefix cache leaks inference patterns
Low-severity cryptographic weakness in AIBrix's prefix cache indexer allows adjacent-network attackers to predict cache keys and infer prompt patterns processed by the LLM inference layer. Exploitation requires existing low-level network access and high complexity, making opportunistic attacks unlikely. Upgrade to AIBrix 0.3.0 if running this component in multi-tenant or shared inference infrastructure.
Risk Assessment
Low practical risk. CVSS 2.6 reflects the limited blast radius: attack requires adjacency to the inference network, low privileges already granted, and high complexity to execute. Impact is confidentiality-only with no integrity or availability degradation. Not in CISA KEV, no known active exploitation. Risk elevates in multi-tenant LLM serving environments where prefix cache could leak cross-tenant prompt patterns.
Recommended Action
1. **Patch**: Upgrade AIBrix to v0.3.0 (fixes randomness in prefix cache hash generation per PR #752).
2. **Network isolation**: Ensure vLLM inference nodes are firewalled to trusted segments only; block lateral access from non-inference workloads.
3. **Least privilege**: Audit who holds low-level access to inference infrastructure network segments.
4. **Detection**: Monitor for anomalous cache-related query patterns or repeated hash-probing behavior in gateway logs.
5. **Workaround if unpatched**: Disable prefix caching in AIBrix config until upgrade is applied.
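The core weakness behind step 1 is that cache keys derived from an unkeyed, predictable hash can be recomputed offline by anyone who knows (or guesses) the hashed content. A minimal Python sketch of the pattern and the usual fix, a keyed hash with a per-deployment secret; the function names and token encoding here are illustrative, not AIBrix's actual implementation:

```python
import hashlib
import hmac
import secrets

def unkeyed_cache_key(prefix_tokens: list[int]) -> str:
    """Predictable: anyone who knows the tokens can recompute this key offline."""
    data = ",".join(map(str, prefix_tokens)).encode()
    return hashlib.sha256(data).hexdigest()[:16]

# Keyed variant: a per-deployment random secret makes keys unpredictable
# to anyone who does not hold the key material.
CACHE_KEY_SECRET = secrets.token_bytes(32)  # generated at startup, never shared

def keyed_cache_key(prefix_tokens: list[int]) -> str:
    data = ",".join(map(str, prefix_tokens)).encode()
    return hmac.new(CACHE_KEY_SECRET, data, hashlib.sha256).hexdigest()[:16]

tokens = [101, 2023, 2003, 1037]
# An attacker needs no secret material to reproduce the unkeyed key:
assert unkeyed_cache_key(tokens) == hashlib.sha256(b"101,2023,2003,1037").hexdigest()[:16]
```

The keyed version stays deterministic within one deployment (so the cache still works) while denying outsiders the ability to precompute keys.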
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-1953?
Low-severity cryptographic weakness in AIBrix's prefix cache indexer allows adjacent-network attackers to predict cache keys and infer prompt patterns processed by the LLM inference layer. Exploitation requires existing low-level network access and high complexity, making opportunistic attacks unlikely. Upgrade to AIBrix 0.3.0 immediately if running this component in multi-tenant or shared inference infrastructure.
Is CVE-2025-1953 actively exploited?
No confirmed active exploitation of CVE-2025-1953 has been reported, but organizations should still patch proactively.
How to fix CVE-2025-1953?
1. **Patch**: Upgrade AIBrix to v0.3.0 (fixes randomness in prefix cache hash generation per PR #752).
2. **Network isolation**: Ensure vLLM inference nodes are firewalled to trusted segments only; block lateral access from non-inference workloads.
3. **Least privilege**: Audit who holds low-level access to inference infrastructure network segments.
4. **Detection**: Monitor for anomalous cache-related query patterns or repeated hash-probing behavior in gateway logs.
5. **Workaround if unpatched**: Disable prefix caching in AIBrix config until upgrade is applied.
What systems are affected by CVE-2025-1953?
This vulnerability affects the following AI/ML architecture patterns: LLM inference infrastructure, model serving, AI gateway/proxy, multi-tenant LLM platforms.
What is the CVSS score for CVE-2025-1953?
CVE-2025-1953 has a CVSS v3.1 base score of 2.6 (LOW). The EPSS exploitation probability is 0.13%.
Technical Details
NVD Description
A vulnerability has been found in vLLM AIBrix 0.2.0 and classified as problematic. Affected by this vulnerability is an unknown functionality of the file pkg/plugins/gateway/prefixcacheindexer/hash.go of the component Prefix Caching. The manipulation leads to insufficiently random values. The complexity of an attack is rather high. The exploitation appears to be difficult. Upgrading to version 0.3.0 is able to address this issue. It is recommended to upgrade the affected component.
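"Insufficiently random values" is CWE-330: output that looks random but can be reproduced by anyone who guesses its seed. A generic Python illustration of the failure mode and its remedy (this is a toy, not the AIBrix code path):

```python
import random
import secrets

# A PRNG seeded with a guessable value (a constant, a timestamp, a PID)
# yields a fully reproducible stream.
victim = random.Random(1234)    # guessable seed
attacker = random.Random(1234)  # attacker guesses the same seed
assert [victim.getrandbits(32) for _ in range(4)] == \
       [attacker.getrandbits(32) for _ in range(4)]

# A CSPRNG value has no guessable seed to replay.
token = secrets.token_hex(16)   # 32 hex chars of OS-provided entropy
```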
Exploitation Scenario
An adversary with low privileges on the same network segment as the AIBrix gateway (e.g., a compromised sidecar container or co-located microservice) probes the prefix cache indexer by sending crafted inference requests. Due to weak randomness in the hash function, they can predict or enumerate cache key collisions, determining which prompt prefixes are actively cached. In a multi-tenant SaaS LLM deployment, this could allow one tenant to infer prompt prefix patterns used by other tenants, leaking system prompt structures or repeated input templates. The high attack complexity means this requires knowledge of the AIBrix caching implementation and controlled network positioning.
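The enumeration step above can be sketched with a toy shared cache whose keys come from a truncated, unkeyed hash. The prompt strings, the 16-bit keyspace, and the `weak_key` helper are all illustrative assumptions, not AIBrix internals:

```python
import hashlib

def weak_key(prefix: str) -> int:
    # Toy stand-in for a predictable cache-key hash: no secret, tiny keyspace.
    return int.from_bytes(hashlib.md5(prefix.encode()).digest()[:2], "big")

# Victim tenant populates the shared prefix cache.
cache = {weak_key("You are a helpful banking assistant."): "<cached KV blocks>"}

# Attacker enumerates candidate prompt prefixes offline and checks which
# keys are live, inferring what other tenants are sending.
candidates = [
    "You are a helpful banking assistant.",
    "You are a helpful travel agent.",
    "Summarize the following document:",
]
leaked = [c for c in candidates if weak_key(c) in cache]
```

With a keyed or properly randomized hash, the attacker cannot compute `weak_key` offline and this enumeration fails.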
CVSS Vector
CVSS:3.1/AV:A/AC:H/PR:L/UI:N/S:U/C:L/I:N/A:N
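Plugging this vector's standard metric weights into the CVSS v3.1 base-score equations reproduces the published 2.6:

```python
import math

# CVSS v3.1 weights for CVSS:3.1/AV:A/AC:H/PR:L/UI:N/S:U/C:L/I:N/A:N
AV, AC, PR, UI = 0.62, 0.44, 0.62, 0.85  # Adjacent / High / Low (scope unchanged) / None
C, I, A = 0.22, 0.0, 0.0                 # Low / None / None

iss = 1 - (1 - C) * (1 - I) * (1 - A)
impact = 6.42 * iss                      # scope unchanged
exploitability = 8.22 * AV * AC * PR * UI

def roundup(x: float) -> float:
    # CVSS "Roundup": round up to one decimal place.
    return math.ceil(x * 10) / 10

base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 2.6
```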
Related Vulnerabilities
- CVE-2025-5120 (CVSS 10.0): smolagents sandbox escape enables unauthenticated RCE. Same attack type: Data Leakage.
- CVE-2026-33663 (CVSS 10.0): n8n member role steals plaintext HTTP credentials. Same attack type: Data Leakage.
- CVE-2023-3765 (CVSS 10.0): MLflow path traversal allows arbitrary file read. Same attack type: Data Leakage.
- CVE-2025-53767 (CVSS 10.0): Azure OpenAI SSRF EoP, no auth required. Same attack type: Privacy Violation.
- CVE-2026-25052 (CVSS 9.9): n8n security flaw enables exploitation. Same attack type: Data Leakage.