CVE-2026-25960

GHSA-v359-jj2v-j536 CRITICAL
Published March 9, 2026
CISO Take

CVE-2026-25960 is a critical SSRF bypass in vLLM that allows unauthenticated attackers to pivot from your LLM inference server to internal cloud metadata services, internal APIs, and adjacent infrastructure — with no credentials required. The original CVE-2026-24779 fix is confirmed ineffective due to URL parser inconsistency between urllib3 (validation) and aiohttp/yarl (execution), meaning any organization that patched the original vuln is still exposed. Upgrade to the patched version immediately and treat this as an active incident if your vLLM instances run in cloud environments where IMDS credential theft is a realistic pivot.

Affected Systems

Package   Ecosystem   Vulnerable Range       Patched
vllm      pip         >= 0.15.1, < 0.17.0    0.17.0

Severity & Risk

CVSS 3.1: 9.8 / 10
EPSS: 0.0% chance of exploitation in 30 days
KEV Status: Not in KEV
Sophistication: Trivial

Recommended Action

  1. PATCH: Upgrade vLLM to the release containing commit 6f3b2047abd4a748e3db4a68543f8221358002c0; verify your version is 0.17.0 or later, where the fix is confirmed.
  2. WORKAROUND: If immediate patching is blocked, disable remote URL loading at the application config level and block it at the API gateway layer.
  3. NETWORK EGRESS: Enforce strict egress filtering on all vLLM hosts: block outbound HTTP/HTTPS to RFC 1918 ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16), link-local (169.254.0.0/16), and loopback.
  4. IMDS HARDENING: Enforce IMDSv2 session tokens (AWS), disable the legacy metadata API (GCP), or apply equivalent controls on Azure to limit credential theft even if the SSRF is triggered.
  5. DETECT: Alert on outbound HTTP from the vLLM process to internal IP ranges, metadata service IPs (169.254.169.254, 100.64.0.1), or any non-approved external hosts.
  6. AUDIT: If CVE-2026-24779 was previously marked remediated in your vulnerability tracker, reopen it; the fix is bypassed and the risk is unresolved.
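The egress ranges named in the list above can be expressed as a simple deny check. The sketch below is illustrative only (a hypothetical helper, not part of vLLM or any vendor tooling), using Python's standard ipaddress module; the CIDR list mirrors the ranges above plus the carrier-grade NAT block that covers 100.64.0.1.

```python
import ipaddress

# Destination ranges to deny for outbound HTTP from vLLM hosts,
# mirroring the egress-filtering guidance above.
DENIED_RANGES = [
    ipaddress.ip_network(cidr)
    for cidr in (
        "10.0.0.0/8",      # RFC 1918
        "172.16.0.0/12",   # RFC 1918
        "192.168.0.0/16",  # RFC 1918
        "169.254.0.0/16",  # link-local, includes IMDS at 169.254.169.254
        "127.0.0.0/8",     # loopback
        "100.64.0.0/10",   # CGNAT, includes 100.64.0.1
    )
]

def is_denied_destination(ip: str) -> bool:
    """Return True if outbound traffic to this IP should be blocked."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in DENIED_RANGES)
```

The same membership test can drive firewall rule generation or the detection alerts described in step 5.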

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.2.6 - AI system security
Clause 8.4 - AI system operation and monitoring
NIST AI RMF
MANAGE 2.2 - Mechanisms to sustain the value of deployed AI systems are evaluated and applied
OWASP LLM Top 10
LLM05 - Supply Chain Vulnerabilities
LLM06 - Sensitive Information Disclosure
LLM07 - Insecure Plugin Design

Technical Details

NVD Description

vLLM is an inference and serving engine for large language models (LLMs). The SSRF protection fix for CVE-2026-24779, added in 0.15.1, can be bypassed in the load_from_url_async method due to inconsistent URL parsing behavior between the validation layer and the actual HTTP client. The fix uses urllib3.util.parse_url() to validate and extract the hostname from user-provided URLs. However, load_from_url_async uses aiohttp to make the actual HTTP requests, and aiohttp internally uses the yarl library for URL parsing. A URL crafted to exploit parsing differences between urllib3 and yarl can therefore pass hostname validation yet cause aiohttp to send the request to a different, internal host. This vulnerability is fixed in 0.17.0.
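The concrete bypass payload is not reproduced here, but the underlying failure mode (two components disagreeing about the same URL string) can be illustrated with the standard library alone. The URL below is hypothetical: modern urllib.parse strips ASCII tab and newline characters before parsing (WHATWG URL alignment, present since Python 3.9.5 era security fixes), so any check performed on the raw string disagrees with the hostname the parser ultimately produces.

```python
from urllib.parse import urlsplit

# A URL with an embedded tab character. Parsers that follow WHATWG
# behavior strip ASCII tab/newline; other parsers may reject the URL
# or treat the bytes as part of the hostname.
tricky = "http://169.254.169.\t254/latest/meta-data/"

parsed = urlsplit(tricky)

# A naive denylist check on the raw string misses the metadata IP...
assert "169.254.169.254" not in tricky
# ...but after tab stripping, that is exactly where the request goes.
assert parsed.hostname == "169.254.169.254"
```

This is why validating a URL with one parser and fetching it with another is unsafe: the validated hostname and the connected hostname are only equal if both parsers normalize identically.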

Exploitation Scenario

An adversary identifies an organization running a vulnerable vLLM release (>= 0.15.1, < 0.17.0) via exposed API endpoint fingerprinting or job posting metadata. They craft a URL using URL encoding tricks, IPv6 literals, or protocol-relative notation that passes urllib3.util.parse_url() hostname validation but resolves differently when yarl (aiohttp's internal parser) processes it — specifically redirecting to http://169.254.169.254/latest/meta-data/iam/security-credentials/ on AWS. The request is submitted to the vLLM model loading API endpoint with no authentication. vLLM's load_from_url_async fetches the URL via aiohttp, receives AWS temporary IAM credentials, and the response is returned or logged. The attacker harvests the credentials, enumerates S3 buckets containing model weights and training data, and uses the IAM role to pivot deeper into the organization's AI infrastructure or broader AWS environment.
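One defensive pattern against this scenario is to resolve and classify the destination address before any request is made, rather than validating one string representation and fetching another. The helper below is a minimal sketch under stated assumptions (a hypothetical function, not vLLM's actual code), and it does not by itself defeat DNS rebinding: real deployments should also pin the connection to the vetted IP or re-check at connect time.

```python
import ipaddress
import socket
from urllib.parse import urlsplit

def assert_public_destination(url: str) -> str:
    """Resolve the URL's host and reject private, link-local, loopback,
    and reserved addresses before any request is made. Returns the
    vetted IP as a string, or raises ValueError."""
    host = urlsplit(url).hostname
    if host is None:
        raise ValueError("URL has no host")
    # Resolve once; an IP literal resolves to itself with no network I/O.
    ip = socket.getaddrinfo(host, None)[0][4][0]
    addr = ipaddress.ip_address(ip)
    if addr.is_private or addr.is_link_local or addr.is_loopback or addr.is_reserved:
        raise ValueError(f"refusing to fetch internal address {ip}")
    return ip
```

Calling this with the IMDS URL from the scenario above raises ValueError, because 169.254.169.254 is link-local; a public IP literal passes through unchanged.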

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

References

Timeline

Published
March 9, 2026
Last Modified
March 18, 2026
First Seen
March 9, 2026