GHSA-ggpf-24jw-3fcw: vLLM: RCE via malicious model, PyTorch < 2.6 bypass

GHSA-ggpf-24jw-3fcw CRITICAL
Published April 23, 2025
CISO Take

If your team runs vLLM for LLM inference, treat this as a critical patch now — upgrade to vLLM 0.8.0 which requires PyTorch 2.6.0+. The prior patch (CVE-2025-24357) that added `weights_only=True` is ineffective on PyTorch < 2.6.0, creating dangerous false confidence in teams that already patched. Any GPU inference server loading models from HuggingFace Hub, shared storage, or external sources is at risk of full host compromise via a single malicious model file.

Risk Assessment

Critical risk. CVSS 9.8 with no authentication, no user interaction, and network-accessible attack surface. The compounding risk factor is the false-fix: teams that patched CVE-2025-24357 likely believe they are safe, while remaining fully vulnerable. Default vLLM installations ship with PyTorch 2.5.1 (pinned in requirements.txt), meaning virtually all unupgraded deployments are exposed. LLM inference servers typically run with GPU-attached, high-privilege access in cloud environments, making post-compromise blast radius severe.

Affected Systems

| Package | Ecosystem | Vulnerable Range | Patched |
|---------|-----------|------------------|---------|
| vllm    | pip       | < 0.8.0          | 0.8.0   |


Severity & Risk

CVSS 3.1
9.8 / 10
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Moderate

Attack Surface

Attack Vector (AV): Network
Attack Complexity (AC): Low
Privileges Required (PR): None
User Interaction (UI): None
Scope (S): Unchanged
Confidentiality (C): High
Integrity (I): High
Availability (A): High

Recommended Action

6 steps
  1. Immediate: Upgrade vLLM to >= 0.8.0 — this pins PyTorch >= 2.6.0, which contains the proper `weights_only=True` fix.

  2. If upgrade is blocked: manually upgrade PyTorch to >= 2.6.0 in your environment.

  3. Audit model provenance: inventory all models currently loaded in vLLM deployments and verify they originate from trusted, controlled sources.

  4. Implement model integrity verification: validate cryptographic checksums or signatures before loading any model artifact.

  5. Run vLLM model loading in sandboxed containers with minimal privileges and no network egress to limit RCE blast radius.

  6. Detection: monitor for unexpected outbound connections or process spawning from vLLM worker processes at model load time.
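Steps 1 and 2 can be verified programmatically. A minimal sketch (the helper names are illustrative, not part of any vLLM tooling) that flags an environment still in the vulnerable range:

```python
# Flag environments where vLLM < 0.8.0 is paired with PyTorch < 2.6.0,
# the combination this advisory describes as vulnerable.
from importlib.metadata import version, PackageNotFoundError


def parse(v: str) -> tuple:
    # Compare only the numeric release segment, e.g. "2.5.1+cu121" -> (2, 5, 1)
    return tuple(int(p) for p in v.split("+")[0].split(".")[:3] if p.isdigit())


def is_vulnerable() -> bool:
    try:
        vllm_v = parse(version("vllm"))
        torch_v = parse(version("torch"))
    except PackageNotFoundError:
        return False  # one of the packages is absent; nothing to patch here
    # Fixed when vLLM >= 0.8.0 (which pins PyTorch >= 2.6.0),
    # or when PyTorch itself has been upgraded to >= 2.6.0.
    return vllm_v < (0, 8, 0) and torch_v < (2, 6, 0)


if __name__ == "__main__":
    print("vulnerable" if is_vulnerable() else "not in vulnerable range")
```

Running this on each inference host (or in CI) gives a quick inventory of which deployments still need the upgrade.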

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art.15 - Accuracy, robustness and cybersecurity
ISO 42001
8.4 - AI system security 9.1 - Monitoring, measurement, analysis and evaluation
NIST AI RMF
GOVERN-6.2 - Policies and procedures to address AI risks from third-party entities MANAGE-2.2 - Mechanisms to account for AI risk in third-party interactions
OWASP LLM Top 10
LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-ggpf-24jw-3fcw?

GHSA-ggpf-24jw-3fcw is a critical remote code execution vulnerability in vLLM: loading a malicious model file can execute arbitrary code on the inference host. The earlier fix for CVE-2025-24357, which passed `weights_only=True` to `torch.load()`, is ineffective on PyTorch < 2.6.0, so deployments that applied that patch remain fully vulnerable. Upgrading to vLLM 0.8.0, which requires PyTorch 2.6.0+, resolves the issue.

Is GHSA-ggpf-24jw-3fcw actively exploited?

No confirmed active exploitation of GHSA-ggpf-24jw-3fcw has been reported, but organizations should still patch proactively.

How to fix GHSA-ggpf-24jw-3fcw?

  1. Immediate: Upgrade vLLM to >= 0.8.0 — this pins PyTorch >= 2.6.0, which contains the proper `weights_only=True` fix.

  2. If upgrade is blocked: manually upgrade PyTorch to >= 2.6.0 in your environment.

  3. Audit model provenance: inventory all models currently loaded in vLLM deployments and verify they originate from trusted, controlled sources.

  4. Implement model integrity verification: validate cryptographic checksums or signatures before loading any model artifact.

  5. Run vLLM model loading in sandboxed containers with minimal privileges and no network egress to limit RCE blast radius.

  6. Detection: monitor for unexpected outbound connections or process spawning from vLLM worker processes at model load time.

What systems are affected by GHSA-ggpf-24jw-3fcw?

This vulnerability affects the following AI/ML architecture patterns: model serving, LLM inference endpoints, ML model deployment pipelines, self-hosted LLM infrastructure, automated model-pulling pipelines.

What is the CVSS score for GHSA-ggpf-24jw-3fcw?

GHSA-ggpf-24jw-3fcw has a CVSS v3.1 base score of 9.8 (CRITICAL).

Technical Details

NVD Description

## Description

https://github.com/vllm-project/vllm/security/advisories/GHSA-rh4j-5rhw-hr54 reported a vulnerability where loading a malicious model could result in code execution on the vLLM host. The fix applied — specifying `weights_only=True` in calls to `torch.load()` — does not solve the problem on PyTorch versions prior to 2.6.0. PyTorch has issued a new advisory for this problem: https://github.com/advisories/GHSA-53q9-r3pm-6pq6. This means versions of vLLM using PyTorch before 2.6.0 remain vulnerable.

## Background

When users install vLLM according to the official manual, the PyTorch version is pinned in the requirements.txt file, so a default installation pulls in PyTorch 2.5.1. CVE-2025-24357 was patched by passing `weights_only=True`, but this is not secure: the reporters demonstrated that `weights_only=True` in PyTorch 2.5.1 and earlier can be bypassed through this interface.

## Fix

Update PyTorch to version 2.6.0.

## Credit

This vulnerability was found by Ji'an Zhou and Li'shuo Song.
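The fix implies a version guard before any untrusted checkpoint is loaded. A minimal sketch of that guard (the helper names are hypothetical; the torch module is passed in as a parameter so the version logic is shown independently of any particular install):

```python
# weights_only=True restricts unpickling to a safelist of types, but the
# safelist had bypassable gadget chains before PyTorch 2.6.0
# (GHSA-53q9-r3pm-6pq6). Refuse to load untrusted checkpoints on older torch.


def weights_only_is_trustworthy(torch_version: str) -> bool:
    """True if weights_only=True is considered an effective restriction."""
    # Compare only the numeric release segment, e.g. "2.5.1+cu121" -> (2, 5)
    release = tuple(int(p) for p in torch_version.split("+")[0].split(".")[:2])
    return release >= (2, 6)


def safe_load(path, torch_module):
    # torch_module is injected so the check is testable without torch installed.
    if not weights_only_is_trustworthy(torch_module.__version__):
        raise RuntimeError(
            f"Refusing to load {path}: PyTorch {torch_module.__version__} is "
            "< 2.6.0, where weights_only=True is bypassable (GHSA-ggpf-24jw-3fcw)"
        )
    return torch_module.load(path, weights_only=True)
```

In a real deployment, `safe_load(path, torch)` would replace direct `torch.load()` calls on any model artifact that crosses a trust boundary.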

Exploitation Scenario

An adversary publishes a malicious PyTorch model file (.pt/.pth) to a public registry such as HuggingFace Hub, embedding a Python pickle payload that executes a reverse shell on deserialization. A developer or automated CI/CD pipeline pulls this model and loads it via vLLM running PyTorch 2.5.1 — the default pinned version. Despite `weights_only=True` being set (the CVE-2025-24357 fix), a known gadget chain in PyTorch < 2.6.0 bypasses the restriction. Code executes on the inference host with the privileges of the vLLM process, typically on a GPU server with broad internal network access. The attacker now has a foothold in the ML infrastructure with access to model weights, API keys stored in environment variables, and adjacent cloud services.
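The deserialization primitive this scenario relies on is ordinary Python pickle behavior, not anything vLLM-specific. A benign illustration of why unpickling an untrusted byte stream runs attacker-chosen code (the payload here only calls `print`; a real one would spawn a shell):

```python
import io
import pickle


class Payload:
    # __reduce__ tells pickle how to reconstruct the object. An attacker can
    # return any callable plus arguments, and pickle.load() invokes it
    # during deserialization -- before the caller ever sees the object.
    def __reduce__(self):
        return (print, ("code executed during unpickling",))


blob = pickle.dumps(Payload())
pickle.load(io.BytesIO(blob))  # prints the message as a side effect of loading
```

This is exactly the class of construct that `weights_only=True` is meant to block, and why a bypass of that safelist restores full remote code execution.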

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

Timeline

Published
April 23, 2025
Last Modified
April 23, 2025
First Seen
March 24, 2026

Related Vulnerabilities