GHSA-ggpf-24jw-3fcw: vLLM: RCE via malicious model, PyTorch < 2.6 bypass
GHSA-ggpf-24jw-3fcw · CRITICAL

If your team runs vLLM for LLM inference, treat this as a critical patch now: upgrade to vLLM 0.8.0, which requires PyTorch 2.6.0+. The prior patch (CVE-2025-24357) that added `weights_only=True` is ineffective on PyTorch < 2.6.0, creating dangerous false confidence in teams that already patched. Any GPU inference server loading models from HuggingFace Hub, shared storage, or external sources is at risk of full host compromise via a single malicious model file.
Risk Assessment
Critical risk. CVSS 9.8 with no authentication, no user interaction, and network-accessible attack surface. The compounding risk factor is the false-fix: teams that patched CVE-2025-24357 likely believe they are safe, while remaining fully vulnerable. Default vLLM installations ship with PyTorch 2.5.1 (pinned in requirements.txt), meaning virtually all unupgraded deployments are exposed. LLM inference servers typically run with GPU-attached, high-privilege access in cloud environments, making post-compromise blast radius severe.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| vllm | pip | < 0.8.0 | 0.8.0 |
If you run any vllm version below 0.8.0, you are affected.
Recommended Action
Six steps:

1. Immediate: upgrade vLLM to >= 0.8.0, which pins PyTorch >= 2.6.0 and contains the proper `weights_only=True` fix.
2. If the upgrade is blocked: manually upgrade PyTorch to >= 2.6.0 in your environment.
3. Audit model provenance: inventory all models currently loaded in vLLM deployments and verify they originate from trusted, controlled sources.
4. Implement model integrity verification: validate cryptographic checksums or signatures before loading any model artifact.
5. Run vLLM model loading in sandboxed containers with minimal privileges and no network egress to limit RCE blast radius.
6. Detection: monitor for unexpected outbound connections or process spawning from vLLM worker processes at model load time.
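Step 4 (integrity verification) can be sketched in a few lines of Python. This is an illustrative helper, not part of vLLM's API: `verify_model` and the pinned digest are assumptions you would wire into your own model-loading path.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large model artifacts need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: str, expected_sha256: str) -> None:
    """Refuse to proceed when a model artifact's checksum differs from the pinned value."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise RuntimeError(
            f"Checksum mismatch for {path}: expected {expected_sha256}, got {actual}"
        )
```

Pin the expected digest in configuration or a signed manifest, never alongside the artifact itself, so a tampered model cannot simply ship a matching checksum.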
Frequently Asked Questions
What is GHSA-ggpf-24jw-3fcw?
If your team runs vLLM for LLM inference, treat this as a critical patch now — upgrade to vLLM 0.8.0 which requires PyTorch 2.6.0+. The prior patch (CVE-2025-24357) that added `weights_only=True` is ineffective on PyTorch < 2.6.0, creating dangerous false confidence in teams that already patched. Any GPU inference server loading models from HuggingFace Hub, shared storage, or external sources is at risk of full host compromise via a single malicious model file.
Is GHSA-ggpf-24jw-3fcw actively exploited?
No confirmed active exploitation of GHSA-ggpf-24jw-3fcw has been reported, but organizations should still patch proactively.
How to fix GHSA-ggpf-24jw-3fcw?
1. Immediate: upgrade vLLM to >= 0.8.0, which pins PyTorch >= 2.6.0 and contains the proper `weights_only=True` fix.
2. If the upgrade is blocked: manually upgrade PyTorch to >= 2.6.0 in your environment.
3. Audit model provenance: inventory all models currently loaded in vLLM deployments and verify they originate from trusted, controlled sources.
4. Implement model integrity verification: validate cryptographic checksums or signatures before loading any model artifact.
5. Run vLLM model loading in sandboxed containers with minimal privileges and no network egress to limit RCE blast radius.
6. Detection: monitor for unexpected outbound connections or process spawning from vLLM worker processes at model load time.
What systems are affected by GHSA-ggpf-24jw-3fcw?
This vulnerability affects the following AI/ML architecture patterns: model serving, LLM inference endpoints, ML model deployment pipelines, self-hosted LLM infrastructure, automated model-pulling pipelines.
What is the CVSS score for GHSA-ggpf-24jw-3fcw?
GHSA-ggpf-24jw-3fcw has a CVSS v3.1 base score of 9.8 (CRITICAL).
Technical Details
NVD Description
## Description

https://github.com/vllm-project/vllm/security/advisories/GHSA-rh4j-5rhw-hr54 reported a vulnerability where loading a malicious model could result in code execution on the vLLM host. The fix applied there, specifying `weights_only=True` in calls to `torch.load()`, does not solve the problem prior to PyTorch 2.6.0. PyTorch has issued a new CVE about this problem: https://github.com/advisories/GHSA-53q9-r3pm-6pq6. This means that versions of vLLM using PyTorch before 2.6.0 remain vulnerable.

## Background

When users install vLLM according to the official manual, the PyTorch version is pinned in the requirements.txt file, so a default install pulls in PyTorch 2.5.1. In CVE-2025-24357, `weights_only=True` was used as the patch, but this is not secure: `weights_only=True` in PyTorch before 2.6.0 can be bypassed, and the reporters used this interface to demonstrate that it is not safe.

## Fix

Update PyTorch to version 2.6.0.

## Credit

This vulnerability was found by Ji'an Zhou and Li'shuo Song.
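Because the vulnerability hinges entirely on the runtime PyTorch version, a startup guard that refuses to serve on an unpatched interpreter is a cheap mitigation. The helpers below are a sketch, not vLLM code; `parse_version` is a deliberately minimal parser for `major.minor.patch` strings (it ignores local build tags like `+cu124`).

```python
MIN_SAFE_TORCH = (2, 6, 0)  # first PyTorch release with the effective weights_only fix

def parse_version(v: str) -> tuple:
    # Strip local build metadata ("2.6.0+cu124" -> "2.6.0"), keep first three numbers.
    return tuple(int(part) for part in v.split("+")[0].split(".")[:3])

def torch_is_patched(torch_version: str) -> bool:
    """Return True when the installed PyTorch is at or above the fixed release."""
    return parse_version(torch_version) >= MIN_SAFE_TORCH
```

In practice you would feed this `torch.__version__` at process start and abort (or log loudly) when it returns False.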
Exploitation Scenario
An adversary publishes a malicious PyTorch model file (.pt/.pth) to a public registry such as HuggingFace Hub, embedding a Python pickle payload that executes a reverse shell on deserialization. A developer or automated CI/CD pipeline pulls this model and loads it via vLLM running PyTorch 2.5.1 — the default pinned version. Despite `weights_only=True` being set (the CVE-2025-24357 fix), a known gadget chain in PyTorch < 2.6.0 bypasses the restriction. Code executes on the inference host with the privileges of the vLLM process, typically on a GPU server with broad internal network access. The attacker now has a foothold in the ML infrastructure with access to model weights, API keys stored in environment variables, and adjacent cloud services.
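The deserialization primitive behind this scenario can be shown with a deliberately benign payload. This is generic Python `pickle` behavior, not vLLM or PyTorch code, and the harmless `eval` of `os.getcwd()` stands in for a real attacker payload such as a reverse shell:

```python
import pickle

class NotAModel:
    # __reduce__ tells pickle how to rebuild the object on load; an attacker
    # can return any callable plus arguments, which pickle invokes immediately
    # during deserialization.
    def __reduce__(self):
        return (eval, ("__import__('os').getcwd()",))

payload = pickle.dumps(NotAModel())

# Merely deserializing the bytes runs the attacker-chosen callable; no method
# on the resulting object ever needs to be called.
result = pickle.loads(payload)
```

`weights_only=True` was meant to restrict unpickling to tensor-related types, but gadget chains in PyTorch < 2.6.0 escape that allowlist, which is why the version bump, not the flag, is the actual fix.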
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
References
Related Vulnerabilities
- CVE-2024-9053 (CVSS 9.8): vllm: RCE via unsafe pickle deserialization in RPC server (same package: vllm)
- CVE-2026-25960 (CVSS 9.8): vllm: SSRF allows internal network access (same package: vllm)
- CVE-2025-47277 (CVSS 9.8): vLLM: RCE via exposed TCPStore in distributed inference (same package: vllm)
- CVE-2024-11041 (CVSS 9.8): vllm: RCE via unsafe pickle deserialization in MessageQueue (same package: vllm)
- CVE-2025-32444 (CVSS 9.8): vLLM: RCE via pickle deserialization on ZeroMQ (same package: vllm)