CVE-2026-22807

GHSA-2pc9-4j83-qjmr CRITICAL
Published January 21, 2026
CISO Take

If your organization runs vLLM between 0.10.1 and 0.13.x, patch to 0.14.0 immediately — this is a pre-auth RCE that fires at model load, before your WAF or API gateway sees a single packet. Any attacker who can influence which model your vLLM instance loads (via a malicious HuggingFace repo or poisoned local path) can own the inference host with zero friction. Audit your model sourcing pipeline and pin to verified checksums while patching.
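
The checksum pinning suggested above can be sketched as a small audit helper. This is a minimal illustration using only the Python standard library; `sha256_of_model_dir` is a hypothetical helper name, and neither vLLM nor HuggingFace ships this exact function.

```python
import hashlib
import pathlib

def sha256_of_model_dir(model_dir: str) -> str:
    """Fold every file in a local model snapshot (relative path + contents)
    into one SHA-256 digest, so a freshly downloaded model can be compared
    against a digest recorded when the model was first vetted."""
    h = hashlib.sha256()
    root = pathlib.Path(model_dir)
    for f in sorted(p for p in root.rglob("*") if p.is_file()):
        h.update(f.relative_to(root).as_posix().encode())  # bind the path into the digest
        h.update(f.read_bytes())
    return h.hexdigest()
```

Record the digest at vetting time and refuse to serve if it drifts; combined with pinning to a specific HuggingFace commit hash, this covers both transport tampering and upstream repo changes.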

Affected Systems

Package   Ecosystem   Vulnerable Range        Patched
vllm      pip         >= 0.10.1, < 0.14.0     0.14.0
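
A quick way to check whether a deployed build falls inside the affected window is a three-part version comparison. A minimal sketch, assuming plain `X.Y.Z` version strings with no pre-release suffixes:

```python
def _parse(v: str) -> tuple:
    # Naive parse: take up to the first three numeric components of "X.Y.Z".
    return tuple(int(p) for p in v.split(".")[:3])

def is_vulnerable(vllm_version: str) -> bool:
    """True if the version sits in the advisory's range: >= 0.10.1, < 0.14.0."""
    return _parse("0.10.1") <= _parse(vllm_version) < _parse("0.14.0")
```

For example, `is_vulnerable("0.13.2")` returns `True` and `is_vulnerable("0.14.0")` returns `False`.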

Severity & Risk

CVSS 3.1: 9.8 / 10
EPSS: 0.0% (chance of exploitation in 30 days)
KEV Status: Not in KEV
Sophistication: Trivial

Recommended Action

  1. PATCH: Upgrade vLLM to >= 0.14.0 immediately — this is the only complete fix.
  2. INTERIM WORKAROUND: Explicitly set trust_remote_code=False; audit all currently loaded model paths for auto_map keys in config.json files.
  3. MODEL PROVENANCE: Implement SHA-256 checksum verification for all HuggingFace model downloads before serving; pin models to specific commit hashes rather than branch or tag references.
  4. NETWORK ISOLATION: Route all model downloads through an approved internal registry or proxy — block direct vLLM-to-HuggingFace egress in production.
  5. DETECTION: Alert on unexpected outbound connections from vLLM processes at startup; monitor for child processes spawned by vLLM during model load; review auto_map entries in all loaded model configs.
  6. SUPPLY CHAIN: Enforce an approved model allowlist in production; prohibit ad-hoc model loading from user-specified paths.
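
The auto_map audit in the workaround and detection steps can be scripted against local model directories. A minimal sketch using only the standard library; `find_auto_map_configs` is an illustrative helper, not a vLLM or HuggingFace API:

```python
import json
import pathlib

def find_auto_map_configs(model_root: str):
    """Walk a local model directory tree and report every config.json that
    declares an `auto_map` key — the dynamic-module hook this CVE abuses.
    Returns (path, auto_map_keys) pairs for human review."""
    hits = []
    for cfg in pathlib.Path(model_root).rglob("config.json"):
        try:
            data = json.loads(cfg.read_text())
        except (json.JSONDecodeError, OSError):
            continue  # unreadable or malformed file: skip, don't crash the audit
        if isinstance(data, dict) and "auto_map" in data:
            hits.append((str(cfg), sorted(data["auto_map"])))
    return hits
```

Run it over your model cache before restarting a patched instance; any hit means the model ships custom Python that deserves manual review.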

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, Robustness and Cybersecurity
Article 9 - Risk Management System
ISO 42001
A.6.2.3 - AI Supply Chain Management
A.8.4 - AI System Supply Chain Management
A.9.5 - AI System Security
NIST AI RMF
GOVERN 6.1 - AI Supply Chain Risk Management
GOVERN 1.2 - Accountability Structures for AI Risk
MANAGE 2.2 - Mechanisms to Address Identified AI Risks
OWASP LLM Top 10
LLM03 - Supply Chain Vulnerabilities

Technical Details

NVD Description

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loads Hugging Face `auto_map` dynamic modules during model resolution without gating on `trust_remote_code`, allowing attacker-controlled Python code in a model repo/path to execute at server startup. An attacker who can influence the model repo/path (local directory or remote Hugging Face repo) can achieve arbitrary code execution on the vLLM host during model load. This happens before any request handling and does not require API access. Version 0.14.0 fixes the issue.

Exploitation Scenario

An adversary creates a malicious model repository on HuggingFace and registers attacker-controlled Python code (a reverse shell or credential harvester) via the auto_map field of the model's config.json. The adversary then engineers adoption of the model: social engineering targeting ML engineers ('try this fine-tuned model'), compromising an upstream model dependency, or gaining write access to a CI/CD pipeline configuration that specifies model paths. When the unpatched vLLM instance initializes, it resolves and executes the auto_map module with the vLLM process's privileges — before serving any requests and before any API-layer security controls engage — achieving full host compromise. No vLLM API credentials are required, only influence over the model path.
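
For context, auto_map in a model's config.json maps transformers Auto classes to Python modules shipped inside the repo. A benign-looking entry might read (filenames and class names here are illustrative):

```json
{
  "model_type": "llama",
  "auto_map": {
    "AutoConfig": "configuration_custom.CustomConfig",
    "AutoModelForCausalLM": "modeling_custom.CustomModelForCausalLM"
  }
}
```

Each value takes the form `<file>.<class>`: the named file is imported from the repo, so any top-level code it contains runs at model load, which is why affected vLLM versions execute attacker code before serving a single request.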

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

Timeline

Published
January 21, 2026
Last Modified
January 30, 2026
First Seen
January 21, 2026