vLLM: trust_remote_code bypass enables RCE
If your team runs vLLM 0.10.1–0.17.x with `--trust-remote-code=False`, that control is silently ignored for certain model sub-components, leaving you with a false sense of security. Any malicious model loaded from an external repository (HuggingFace, S3, etc.) can execute arbitrary code on your inference server. Upgrade to vLLM 0.18.0 immediately; until you can patch, restrict model loading strictly to internally hosted, verified artifacts.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| vllm | pip | >= 0.10.1, < 0.18.0 | 0.18.0 |
Do you use vLLM? Any deployment in the version range above is affected.
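The vulnerable range in the table can be checked programmatically. The sketch below (illustrative, not an official tool; it assumes plain `X.Y.Z` version strings and will not handle pre-release suffixes) compares the installed `vllm` version against the advisory's bounds:

```python
# Check whether the installed vLLM falls in the vulnerable range
# [0.10.1, 0.18.0). Illustrative sketch; a CI gate could fail on True.
from importlib.metadata import version, PackageNotFoundError
from typing import Optional


def parse(v: str) -> tuple:
    """Parse a plain 'X.Y.Z' version string into a comparable tuple."""
    return tuple(int(part) for part in v.split(".")[:3])


def is_vulnerable(v: str) -> bool:
    """True if v is in the advisory's range: >= 0.10.1, < 0.18.0."""
    return parse("0.10.1") <= parse(v) < parse("0.18.0")


def check_installed() -> Optional[bool]:
    """Vulnerability status of the installed vllm, or None if not installed."""
    try:
        return is_vulnerable(version("vllm"))
    except PackageNotFoundError:
        return None
```

For pre-release or post-release version strings, `packaging.version.Version` would be a more robust comparator than this tuple parse.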
Severity & Risk
Recommended Action
1. PATCH: Upgrade to vLLM 0.18.0; this is the only complete fix.
2. IMMEDIATE WORKAROUND: Restrict model sources to internally mirrored, verified repositories only. Block direct HuggingFace or external S3 model loading until patched.
3. AUDIT: Inventory all vLLM instances across environments (dev, staging, prod); check versions with `pip show vllm`.
4. DETECT: Review recent model-load events in vLLM logs for external model sources; flag any model loaded from outside your approved registry.
5. HARDEN: Implement model signing and hash verification before loading any model artifact, regardless of vLLM version.
6. ISOLATE: Run vLLM inference processes with minimum necessary permissions and network segmentation to limit the blast radius if exploited.
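The HARDEN step above can be sketched as a pre-load digest check. This is a minimal illustration (the allowlist format and file names are hypothetical, not part of the advisory): compute the SHA-256 of each artifact and refuse anything that does not match a pinned internal allowlist.

```python
# Minimal sketch of hash verification before model loading.
# The allowlist mapping (filename -> expected sha256 hex) is a
# hypothetical internal format, not a vLLM feature.
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large weight files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifact(path: Path, allowlist: dict) -> bool:
    """True only if the artifact's digest matches its pinned allowlist entry."""
    expected = allowlist.get(path.name)
    return expected is not None and sha256_of(path) == expected
```

A deployment wrapper would call `verify_artifact` on every weight and config file before ever invoking vLLM on the directory; note this verifies integrity against your pin, not the trustworthiness of the original upload.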
Classification
Compliance Impact
This CVE is relevant to:
Technical Details
NVD Description
vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
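The flaw class described above can be illustrated schematically. This is a simplified sketch, not the actual vLLM source: the user's opt-out is available at the call site, but a sub-component loader hardcodes `trust_remote_code=True`, so the setting never takes effect.

```python
# Simplified illustration of the flaw class; not the actual vLLM code.
# The user's flag is available, but the buggy call site ignores it.

def load_subcomponent_buggy(repo_id: str, user_trust_remote_code: bool) -> dict:
    # BUG: the user's opt-out is silently discarded at this call site.
    return {"repo": repo_id, "trust_remote_code": True}  # hardcoded


def load_subcomponent_fixed(repo_id: str, user_trust_remote_code: bool) -> dict:
    # FIX (the 0.18.0 behavior): propagate the user's setting to every loader.
    return {"repo": repo_id, "trust_remote_code": user_trust_remote_code}
```

The audit takeaway: any loader that constructs its own `trust_remote_code` argument instead of threading through the user's configuration reintroduces this class of bypass.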
Exploitation Scenario
An adversary publishes a seemingly legitimate LLM variant on HuggingFace — a fine-tuned model with a slightly modified architecture requiring custom code. The model's custom Python files contain a reverse shell or credential harvester. A data science team evaluating the model runs vLLM with `--trust-remote-code=False`, confident the security control protects them. During model loading, vLLM internally calls `from_pretrained` on sub-components with a hardcoded `trust_remote_code=True`, bypassing the user's flag. The malicious Python code executes in the context of the inference server process, giving the adversary RCE with access to model weights, inference traffic, internal APIs, and potentially cloud credentials mounted in the environment.
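One practical triage step against this scenario: models whose repositories ship custom Python files are exactly the ones that need remote-code execution to load, so they can be flagged before vLLM ever sees them. A minimal sketch (the quarantine policy and paths are illustrative, not from the advisory):

```python
# Triage sketch: flag model directories that bundle custom Python code,
# since loading such models requires executing that code. Policy and
# paths here are illustrative assumptions.
from pathlib import Path


def custom_code_files(model_dir: Path) -> list:
    """Return any .py files shipped alongside the model artifacts."""
    return sorted(p for p in model_dir.rglob("*.py"))


def requires_review(model_dir: Path) -> bool:
    """A model carrying custom code should be quarantined for manual review."""
    return bool(custom_code_files(model_dir))
```

This is a coarse filter: it catches models that declare custom architectures, but it cannot judge whether the bundled code is benign, so flagged models still need human review.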
Weaknesses (CWE)
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
References
- github.com/advisories/GHSA-7972-pg2x-xr59
- github.com/vllm-project/vllm/commit/00bd08edeee5dd4d4c13277c0114a464011acf72
- github.com/vllm-project/vllm/pull/36192
- github.com/vllm-project/vllm/security/advisories/GHSA-7972-pg2x-xr59
- nvd.nist.gov/vuln/detail/CVE-2026-27893