CVE-2026-22807: vllm: Code Injection enables RCE

GHSA-2pc9-4j83-qjmr CRITICAL
Published January 21, 2026
CISO Take

If your organization runs vLLM between 0.10.1 and 0.13.x, patch to 0.14.0 immediately — this is a pre-auth RCE that fires at model load, before your WAF or API gateway sees a single packet. Any attacker who can influence which model your vLLM instance loads (via a malicious HuggingFace repo or poisoned local path) can own the inference host with zero friction. Audit your model sourcing pipeline and pin to verified checksums while patching.

Risk Assessment

CRITICAL. CVSS 9.8 with zero prerequisites — no authentication, no user interaction, network-accessible — makes this about as exploitable as a vulnerability gets on paper. The low EPSS (0.02%) reflects limited current in-the-wild activity, not the severity of potential impact. Organizations loading models from public HuggingFace repos without strict checksum validation are at highest risk. Blast radius is full host compromise of inference infrastructure, which in AI-heavy environments typically means GPU clusters, proprietary model weights, training data, and lateral movement into adjacent internal services.

Affected Systems

Package  Ecosystem  Vulnerable Range     Patched
vllm     pip        >= 0.10.1, < 0.14.0  0.14.0

78.9K · 126 dependents · 56% patched · ~32d to patch

Severity & Risk

CVSS 3.1
9.8 / 10
EPSS
0.02%
chance of exploitation in 30 days
Higher than 6% of all CVEs
Exploitation Status
No known exploitation
Sophistication
Trivial

Attack Surface

AV Network
AC Low
PR None
UI None
S Unchanged
C High
I High
A High

Recommended Action

6 steps
  1. PATCH

    Upgrade vLLM to >= 0.14.0 immediately — this is the only complete fix.

  2. INTERIM WORKAROUND

    Explicitly set trust_remote_code=False as defense in depth, and audit all currently loaded model paths for auto_map keys in config.json files. Note that affected versions load auto_map modules without gating on trust_remote_code, so the audit is the load-bearing part of this workaround.

  3. MODEL PROVENANCE

    Implement SHA-256 checksum verification for all HuggingFace model downloads before serving; pin models to specific commit hashes rather than branch or tag references.

  4. NETWORK ISOLATION

    Route all model downloads through an approved internal registry or proxy — block direct vLLM-to-HuggingFace egress in production.

  5. DETECTION

    Alert on unexpected outbound connections from vLLM processes at startup; monitor for child processes spawned by vLLM during model load; review auto_map entries in all loaded model configs.

  6. SUPPLY CHAIN

    Enforce an approved model allowlist in production; prohibit ad-hoc model loading from user-specified paths.
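The config.json audit in step 2 can be scripted. A minimal sketch, assuming your models are cached under a single root directory; the function name is illustrative and not part of vLLM or HuggingFace tooling:

```python
import json
from pathlib import Path


def find_auto_map_configs(root):
    """Scan a model cache directory tree for config.json files that
    carry an `auto_map` key.

    Any hit means the model references repo-supplied dynamic Python
    modules, which an unpatched vLLM (>= 0.10.1, < 0.14.0) would load
    and execute at startup regardless of trust_remote_code. Review
    every flagged model before serving it.
    """
    hits = []
    for cfg in Path(root).rglob("config.json"):
        try:
            data = json.loads(cfg.read_text())
        except (json.JSONDecodeError, OSError, UnicodeDecodeError):
            continue  # unreadable or non-JSON file; flag separately if desired
        if isinstance(data, dict) and "auto_map" in data:
            hits.append((str(cfg), data["auto_map"]))
    return hits
```

Run this against your HuggingFace cache (typically `~/.cache/huggingface`) and any local model directories passed to vLLM; an empty result means none of the cached configs request dynamic code loading.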

CISA SSVC Assessment

Decision Track
Exploitation none
Automatable No
Technical Impact total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, Robustness and Cybersecurity; Article 9 - Risk Management System
ISO 42001
A.6.2.3 - AI Supply Chain Management; A.8.4 - AI System Supply Chain Management; A.9.5 - AI System Security
NIST AI RMF
GOVERN 6.1 - AI Supply Chain Risk Management; GOVERN-1.2 - Accountability Structures for AI Risk; MANAGE-2.2 - Mechanisms to Address Identified AI Risks
OWASP LLM Top 10
LLM03 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2026-22807?

CVE-2026-22807 is a critical (CVSS 9.8) code-injection vulnerability in vLLM versions 0.10.1 through 0.13.x. Affected versions load Hugging Face auto_map dynamic modules during model resolution without gating on trust_remote_code, so attacker-controlled Python code in a model repo or local path executes on the inference host at server startup, before any request handling and without requiring API access. Version 0.14.0 fixes the issue.

Is CVE-2026-22807 actively exploited?

No confirmed active exploitation of CVE-2026-22807 has been reported, but organizations should still patch proactively.

How to fix CVE-2026-22807?

1. PATCH: Upgrade vLLM to >= 0.14.0 immediately — this is the only complete fix.
2. INTERIM WORKAROUND: Explicitly set trust_remote_code=False; audit all currently loaded model paths for auto_map keys in config.json files.
3. MODEL PROVENANCE: Implement SHA-256 checksum verification for all HuggingFace model downloads before serving; pin models to specific commit hashes rather than branch or tag references.
4. NETWORK ISOLATION: Route all model downloads through an approved internal registry or proxy — block direct vLLM-to-HuggingFace egress in production.
5. DETECTION: Alert on unexpected outbound connections from vLLM processes at startup; monitor for child processes spawned by vLLM during model load; review auto_map entries in all loaded model configs.
6. SUPPLY CHAIN: Enforce an approved model allowlist in production; prohibit ad-hoc model loading from user-specified paths.

What systems are affected by CVE-2026-22807?

This vulnerability affects the following AI/ML architecture patterns: LLM inference serving, model serving, AI/ML CI/CD pipelines, model evaluation pipelines, multi-tenant AI platforms.

What is the CVSS score for CVE-2026-22807?

CVE-2026-22807 has a CVSS v3.1 base score of 9.8 (CRITICAL). The EPSS exploitation probability is 0.02%.

Technical Details

NVD Description

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loads Hugging Face `auto_map` dynamic modules during model resolution without gating on `trust_remote_code`, allowing attacker-controlled Python code in a model repo/path to execute at server startup. An attacker who can influence the model repo/path (local directory or remote Hugging Face repo) can achieve arbitrary code execution on the vLLM host during model load. This happens before any request handling and does not require API access. Version 0.14.0 fixes the issue.
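To make the mechanism concrete, this is roughly what an auto_map entry in a repo's config.json looks like; the module and class names below are hypothetical. In affected vLLM versions, the referenced module (a .py file shipped in the same repo) is imported, and therefore executed, during model resolution at startup:

```python
# Hypothetical config.json contents for a model repo that ships
# custom modeling code. The right-hand side of each auto_map entry
# names a Python module in the repo; importing it runs any top-level
# code it contains. An affected vLLM performs this import at model
# load, without checking trust_remote_code.
malicious_config = {
    "model_type": "llama",
    "auto_map": {
        "AutoModelForCausalLM": "modeling_custom.CustomModelForCausalLM"
    },
}


def requires_remote_code(config: dict) -> bool:
    """True if serving this config entails importing repo-supplied Python."""
    return bool(config.get("auto_map"))
```

A benign config with no auto_map key never triggers the dynamic import path, which is why auditing for that key is a useful stopgap while patching.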

Exploitation Scenario

An adversary creates a malicious model repository on HuggingFace and embeds a reverse shell or credential harvester as attacker-controlled Python code referenced by the auto_map field of the model's config.json. The adversary then engineers adoption of the model: social engineering targeting ML engineers ('try this fine-tuned model'), compromising an upstream model dependency, or gaining write access to a CI/CD pipeline configuration that specifies model paths. When the unpatched vLLM instance initializes, it resolves and executes the auto_map module with vLLM process privileges — before serving any requests and before any API-layer security controls engage — achieving full host compromise. No vLLM API credentials are required, only influence over the model path.
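The checksum-verification mitigation from the recommended actions (step 3) can be sketched in a few lines of stdlib Python. The manifest format and function name below are illustrative assumptions, not a vLLM or HuggingFace API; in practice the manifest would be recorded when the model is first vetted, alongside a pinned commit hash:

```python
import hashlib
from pathlib import Path


def verify_model_dir(model_dir, manifest):
    """Verify SHA-256 digests of downloaded model files against a
    pinned manifest before handing the directory to vLLM.

    `manifest` maps relative file names to expected hex digests.
    Raises ValueError on the first mismatch, so a tampered or
    swapped file blocks serving entirely.
    """
    for name, expected in manifest.items():
        data = (Path(model_dir) / name).read_bytes()
        digest = hashlib.sha256(data).hexdigest()
        if digest != expected:
            raise ValueError(f"checksum mismatch for {name}: got {digest}")
    return True
```

Gating the vLLM launch on this check means an attacker who swaps a file in the model path cannot get their code loaded without also defeating the manifest.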

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

Timeline

Published
January 21, 2026
Last Modified
January 30, 2026
First Seen
January 21, 2026

Related Vulnerabilities