If your organization runs vLLM between 0.10.1 and 0.13.x, patch to 0.14.0 immediately — this is a pre-auth RCE that fires at model load, before your WAF or API gateway sees a single packet. Any attacker who can influence which model your vLLM instance loads (via a malicious HuggingFace repo or poisoned local path) can own the inference host with zero friction. Audit your model sourcing pipeline and pin to verified checksums while patching.
Risk Assessment
CRITICAL. CVSS 9.8 with zero prerequisites — no authentication, no user interaction, network-accessible — makes this as exploitable as a vulnerability gets on paper. The low EPSS (0.0002) reflects limited current in-the-wild activity, not the severity of potential impact. Organizations loading models from public HuggingFace repos without strict checksum validation are at highest risk. Blast radius is full host compromise of inference infrastructure, which in AI-heavy environments typically means GPU clusters, proprietary model weights, training data, and lateral movement into adjacent internal services.
Recommended Action
Six steps:
1. PATCH: Upgrade vLLM to >= 0.14.0 immediately — this is the only complete fix.
2. INTERIM WORKAROUND: Explicitly set trust_remote_code=False; audit all currently loaded model paths for auto_map keys in config.json files.
3. MODEL PROVENANCE: Implement SHA-256 checksum verification for all HuggingFace model downloads before serving; pin models to specific commit hashes rather than branch or tag references.
4. NETWORK ISOLATION: Route all model downloads through an approved internal registry or proxy — block direct vLLM-to-HuggingFace egress in production.
5. DETECTION: Alert on unexpected outbound connections from vLLM processes at startup; monitor for child processes spawned by vLLM during model load; review auto_map entries in all loaded model configs.
6. SUPPLY CHAIN: Enforce an approved model allowlist in production; prohibit ad-hoc model loading from user-specified paths.
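The auto_map audit in step 2 can be scripted. The sketch below walks a local model directory tree and flags any config.json that declares an auto_map key; the root path and report format are illustrative assumptions, not part of vLLM.

```python
# Sketch: flag config.json files that declare "auto_map", the dynamic-module
# hook abused by CVE-2026-22807. Paths and output format are illustrative.
import json
from pathlib import Path

def find_auto_map_configs(model_root: str) -> list[str]:
    """Return paths of config.json files that declare an auto_map entry."""
    root = Path(model_root)
    flagged: list[str] = []
    if not root.is_dir():
        return flagged
    for config_path in sorted(root.rglob("config.json")):
        try:
            config = json.loads(config_path.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed file: skip, or log for review
        if isinstance(config, dict) and "auto_map" in config:
            flagged.append(str(config_path))
    return flagged

if __name__ == "__main__":
    # "/models" is a placeholder; point this at your model cache.
    for path in find_auto_map_configs("/models"):
        print(f"REVIEW: {path} declares auto_map (dynamic code runs at load)")
```

A flagged config is not proof of compromise — auto_map is a legitimate Transformers feature — but every hit should be manually reviewed before an unpatched vLLM loads it.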
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2026-22807?
CVE-2026-22807 is a critical (CVSS 9.8) remote code execution vulnerability in vLLM, the inference and serving engine for large language models. In versions 0.10.1 through 0.13.x, vLLM loads Hugging Face `auto_map` dynamic modules during model resolution without gating on `trust_remote_code`, so attacker-controlled Python code in a model repo or local path executes at server startup — before any request handling and without requiring API access. Version 0.14.0 fixes the issue.
Is CVE-2026-22807 actively exploited?
No confirmed active exploitation of CVE-2026-22807 has been reported, but organizations should still patch proactively.
How to fix CVE-2026-22807?
1. PATCH: Upgrade vLLM to >= 0.14.0 immediately — this is the only complete fix.
2. INTERIM WORKAROUND: Explicitly set trust_remote_code=False; audit all currently loaded model paths for auto_map keys in config.json files.
3. MODEL PROVENANCE: Implement SHA-256 checksum verification for all HuggingFace model downloads before serving; pin models to specific commit hashes rather than branch or tag references.
4. NETWORK ISOLATION: Route all model downloads through an approved internal registry or proxy — block direct vLLM-to-HuggingFace egress in production.
5. DETECTION: Alert on unexpected outbound connections from vLLM processes at startup; monitor for child processes spawned by vLLM during model load; review auto_map entries in all loaded model configs.
6. SUPPLY CHAIN: Enforce an approved model allowlist in production; prohibit ad-hoc model loading from user-specified paths.
What systems are affected by CVE-2026-22807?
This vulnerability affects the following AI/ML architecture patterns: LLM inference serving, model serving, AI/ML CI/CD pipelines, model evaluation pipelines, multi-tenant AI platforms.
What is the CVSS score for CVE-2026-22807?
CVE-2026-22807 has a CVSS v3.1 base score of 9.8 (CRITICAL). The EPSS exploitation probability is 0.02%.
Technical Details
NVD Description
vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loads Hugging Face `auto_map` dynamic modules during model resolution without gating on `trust_remote_code`, allowing attacker-controlled Python code in a model repo/path to execute at server startup. An attacker who can influence the model repo/path (local directory or remote Hugging Face repo) can achieve arbitrary code execution on the vLLM host during model load. This happens before any request handling and does not require API access. Version 0.14.0 fixes the issue.
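The bug class is easy to state in code: a loader that imports the module named by auto_map without consulting a trust_remote_code flag. The sketch below is an illustration of that class and of the patched behavior — it is not vLLM's actual code, and the function and field names are assumptions.

```python
# Illustrative sketch of the bug class (NOT vLLM's real implementation):
# a loader that honors config.json's auto_map. The fix is to gate the
# dynamic import on an explicit trust_remote_code opt-in, as 0.14.0 does.
import importlib

def resolve_model_class(config: dict, trust_remote_code: bool = False):
    auto_map = config.get("auto_map")
    if auto_map is None:
        return "builtin-architecture"  # normal path: no dynamic code involved
    if not trust_remote_code:
        # Patched behavior: refuse to execute repo-supplied Python.
        raise ValueError(
            "config.json declares auto_map but trust_remote_code=False; "
            "refusing to import repo-supplied code"
        )
    # Opt-in path: this import EXECUTES code shipped with the model repo.
    module_name, _, class_name = auto_map["AutoModel"].rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, class_name)
```

In the vulnerable versions the equivalent of the `trust_remote_code` check was missing on this code path, so the import — and any attacker code in the module — ran unconditionally at model load.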
Exploitation Scenario
An adversary creates a malicious model repository on HuggingFace and embeds a reverse shell or credential harvester in the auto_map field of the model's config.json as attacker-controlled Python code. The adversary then engineers model adoption: social engineering targeting ML engineers ("try this fine-tuned model"), compromising an upstream model dependency, or gaining write access to a CI/CD pipeline configuration that specifies model paths. When the unpatched vLLM instance initializes, it resolves and executes the auto_map module with vLLM process privileges — before serving any requests and before any API-layer security controls engage — achieving full host compromise. No vLLM API credentials are required, only influence over the model path.
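This scenario succeeds because nothing verifies what was downloaded before it is loaded. A minimal provenance gate, matching the SHA-256 recommendation above, is sketched here with the standard library; the manifest format and file names are assumptions for illustration.

```python
# Sketch: verify downloaded model files against pinned SHA-256 digests
# before pointing vLLM at them. The manifest format is hypothetical;
# adapt it to however your pipeline records approved checksums.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 without loading it into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model_dir(model_dir: str, manifest: dict[str, str]) -> None:
    """Raise if any manifest entry is missing or its digest mismatches."""
    root = Path(model_dir)
    for rel_path, expected in manifest.items():
        actual = sha256_of(root / rel_path)  # FileNotFoundError if absent
        if actual != expected:
            raise RuntimeError(f"checksum mismatch for {rel_path}: {actual}")
```

Run this between download and serving, and fail closed: a model directory that does not verify never reaches vLLM's model path.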
Weaknesses (CWE)
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
References
- github.com/advisories/GHSA-2pc9-4j83-qjmr
- nvd.nist.gov/vuln/detail/CVE-2026-22807
- github.com/vllm-project/vllm/commit/78d13ea9de4b1ce5e4d8a5af9738fea71fb024e5 (Patch)
- github.com/vllm-project/vllm/pull/32194 (Issue, Patch)
- github.com/vllm-project/vllm/releases/tag/v0.14.0 (Product, Release)
- github.com/vllm-project/vllm/security/advisories/GHSA-2pc9-4j83-qjmr (Patch, Vendor)
Related Vulnerabilities
- CVE-2024-9053 (9.8) vllm: RCE via unsafe pickle deserialization in RPC server (same package: vllm)
- CVE-2026-25960 (9.8) vllm: SSRF allows internal network access (same package: vllm)
- CVE-2025-47277 (9.8) vLLM: RCE via exposed TCPStore in distributed inference (same package: vllm)
- CVE-2024-11041 (9.8) vllm: RCE via unsafe pickle deserialization in MessageQueue (same package: vllm)
- CVE-2025-32444 (9.8) vLLM: RCE via pickle deserialization on ZeroMQ (same package: vllm)