vLLM's trust_remote_code=False flag can be silently bypassed in versions prior to 0.11.1 — attackers can publish a benign-looking model on any public hub (e.g., Hugging Face) that executes arbitrary Python on your inference server at load time. If you run vLLM in production, patch to 0.11.1 immediately and audit every model source your pipelines pull from. Until patched, treat every external model load as a potential RCE vector regardless of your trust settings.
Risk Assessment
HIGH. CVSS 8.8 with network-accessible attack path and no privilege requirements makes this broadly exploitable. The critical aggravating factor is the security control bypass: organizations that explicitly set trust_remote_code=False believe they are protected when they are not. vLLM is widely deployed in enterprise LLM serving infrastructure, meaning blast radius is large. EPSS is low (0.00205) at time of publication but exploitability is straightforward once the vulnerability is understood — a malicious model repo is the only infrastructure needed. Not in CISA KEV yet but supply-chain RCE in AI inference engines warrants proactive response.
Recommended Action
Seven steps:
1. PATCH: Upgrade vLLM to >= 0.11.1 immediately (pip install --upgrade vllm).
2. INTERIM WORKAROUND: Until patched, restrict model sources to an internal registry or a vetted allowlist — do not load arbitrary community models.
3. AUDIT: Review all model sources currently in use; check config.json files for auto_map entries pointing to external repositories (see the audit sketch after this list).
4. DO NOT TRUST THE FLAG: Explicitly passing trust_remote_code=False is NOT a compensating control in affected versions — remove it from your runbooks as a false safety net.
5. SANDBOX: Run model loading in isolated containers with no network egress to reduce blast radius.
6. DETECT: Monitor for unexpected outbound connections from vLLM processes during model initialization; alert on connections to github.com or huggingface.co from inference hosts that are not part of approved model pull workflows.
7. VERIFY: After patching, confirm your vllm version with pip show vllm; the audit sketch after this list also checks the version programmatically.
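The following is a minimal audit sketch in Python that combines steps 1, 3, and 7: it checks whether the installed vLLM is at least 0.11.1 and scans a local model cache for config.json files that declare auto_map. The cache path and helper names are assumptions (the default Hugging Face cache is used here); point it at whatever directory or internal model store your pipelines actually pull from.

# audit_vllm_models.py: hypothetical file name; adjust paths for your environment
import json
import sys
from importlib.metadata import PackageNotFoundError, version
from pathlib import Path

from packaging.version import Version

MODEL_DIR = Path.home() / ".cache" / "huggingface" / "hub"  # assumption: default HF cache
PATCHED = Version("0.11.1")

def vllm_is_patched() -> bool:
    # Step 7: confirm the installed vLLM version programmatically.
    try:
        installed = Version(version("vllm"))
    except PackageNotFoundError:
        print("vllm is not installed on this host")
        return True
    print(f"vllm {installed} installed (fixed in >= {PATCHED})")
    return installed >= PATCHED

def flag_auto_map_configs(root: Path) -> None:
    # Step 3: flag any cached config.json that declares auto_map. Values containing
    # '--' follow the Hugging Face convention for code hosted in a different
    # repository and deserve the closest review.
    for cfg in root.rglob("config.json"):
        try:
            data = json.loads(cfg.read_text())
        except (OSError, json.JSONDecodeError):
            continue
        auto_map = data.get("auto_map")
        if not isinstance(auto_map, dict) or not auto_map:
            continue
        cross_repo = {k: v for k, v in auto_map.items() if isinstance(v, str) and "--" in v}
        print(f"REVIEW {cfg}")
        print(f"  auto_map: {auto_map}")
        if cross_repo:
            print(f"  cross-repo entries: {cross_repo}")

if __name__ == "__main__":
    patched = vllm_is_patched()
    flag_auto_map_configs(MODEL_DIR)
    sys.exit(0 if patched else 1)

The exit status is nonzero when the installed vLLM predates the fix, so the same script can gate a CI or deployment job.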
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-66448?
vLLM's trust_remote_code=False flag can be silently bypassed in versions prior to 0.11.1 — attackers can publish a benign-looking model on any public hub (e.g., Hugging Face) that executes arbitrary Python on your inference server at load time. If you run vLLM in production, patch to 0.11.1 immediately and audit every model source your pipelines pull from. Until patched, treat every external model load as a potential RCE vector regardless of your trust settings.
Is CVE-2025-66448 actively exploited?
No confirmed active exploitation of CVE-2025-66448 has been reported, but organizations should still patch proactively.
How to fix CVE-2025-66448?
1. PATCH: Upgrade vLLM to >= 0.11.1 immediately (pip install --upgrade vllm). 2. INTERIM WORKAROUND: Until patched, restrict model sources to an internal registry or a vetted allowlist — do not load arbitrary community models. 3. AUDIT: Review all model sources currently in use; check config.json files for auto_map entries pointing to external repositories. 4. DO NOT TRUST THE FLAG: Explicitly passing trust_remote_code=False is NOT a compensating control in affected versions — remove it from your runbooks as a false safety net. 5. SANDBOX: Run model loading in isolated containers with no network egress to reduce blast radius. 6. DETECT: Monitor for unexpected outbound connections from vLLM processes during model initialization; alert on connections to github.com or huggingface.co from inference hosts that are not part of approved model pull workflows. 7. VERIFY: After patching, confirm your vllm version with pip show vllm.
What systems are affected by CVE-2025-66448?
This vulnerability affects the following AI/ML architecture patterns: LLM inference servers, model serving, AI/ML deployment pipelines, model hub integrations, MLOps CI/CD pipelines.
What is the CVSS score for CVE-2025-66448?
CVE-2025-66448 has a CVSS v3.1 base score of 8.8 (HIGH). The EPSS exploitation probability is approximately 0.2% (score 0.00205) at time of publication.
Technical Details
NVD Description
vLLM is an inference and serving engine for large language models (LLMs). Prior to 0.11.1, vllm has a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vllm loads a model config that contains an auto_map entry, the config class resolves that mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class. This fetches and executes Python from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repo whose config.json points via auto_map to a separate malicious backend repo; loading the frontend will silently run the backend’s code on the victim host. This vulnerability is fixed in 0.11.1.
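To make the auto_map mechanism concrete, here is a hypothetical fragment of a frontend repo's config.json (repository and class names invented for illustration). In the Hugging Face convention, an auto_map value containing "--" names the repository the Python module is fetched from, so the classes below would be pulled from the attacker's separate backend repo and executed at load time:

{
  "architectures": ["FriendlyVisionLanguageModel"],
  "auto_map": {
    "AutoConfig": "attacker/backend-repo--configuration_payload.PayloadConfig",
    "AutoModel": "attacker/backend-repo--modeling_payload.PayloadModel"
  }
}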
Exploitation Scenario
An adversary registers two GitHub/Hugging Face accounts. The first hosts a legitimate-looking multimodal model repository (frontend repo) with a well-crafted README, model card, and config.json. The config.json includes an auto_map field pointing to the adversary's second repository (backend repo) which hosts a malicious Python class. The frontend repo is promoted in AI/ML communities, referenced in blog posts, or submitted to model leaderboards to build credibility. A target organization's MLOps pipeline or developer runs vllm.LLM('attacker/benign-model', trust_remote_code=False) — the False flag is ignored, vLLM resolves get_class_from_dynamic_module against the auto_map URL, fetches the malicious Python from the backend repo, and executes it on the inference host. The payload can drop a reverse shell, exfiltrate environment variables (AWS credentials, OpenAI keys, internal API tokens), or install a persistent backdoor. The entire compromise happens silently before any inference request is processed.
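As a stop-gap while patching rolls out, a pre-load guard along these lines can fail closed before a model identifier ever reaches the engine. This is a sketch, not vLLM functionality: it assumes models are fetched from the Hugging Face Hub, and it conservatively rejects any auto_map entry because on affected versions any such entry can trigger dynamic code loading.

import json

from huggingface_hub import hf_hub_download  # assumption: models come from the Hugging Face Hub

def assert_no_auto_map(repo_id: str) -> None:
    # Download only the config and refuse to proceed if it declares auto_map.
    # Entries containing '--' point at a second, possibly attacker-controlled, repository.
    config_path = hf_hub_download(repo_id=repo_id, filename="config.json")
    with open(config_path) as fh:
        config = json.load(fh)
    auto_map = config.get("auto_map")
    if isinstance(auto_map, dict) and auto_map:
        cross_repo = {k: v for k, v in auto_map.items() if isinstance(v, str) and "--" in v}
        raise RuntimeError(
            f"refusing to load {repo_id}: auto_map present {auto_map}; "
            f"cross-repo entries: {cross_repo or 'none'}"
        )

# Usage (the repo id is the hypothetical one from the scenario above):
# assert_no_auto_map("attacker/benign-model")   # raises before any model code runs
# llm = vllm.LLM("attacker/benign-model")       # only reached if the guard passes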
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
Related Vulnerabilities
CVE-2024-9053 (CVSS 9.8, same package: vllm): RCE via unsafe pickle deserialization in RPC server
CVE-2026-25960 (CVSS 9.8, same package: vllm): SSRF allows internal network access
CVE-2025-47277 (CVSS 9.8, same package: vllm): RCE via exposed TCPStore in distributed inference
CVE-2024-11041 (CVSS 9.8, same package: vllm): RCE via unsafe pickle deserialization in MessageQueue
CVE-2025-32444 (CVSS 9.8, same package: vllm): RCE via pickle deserialization on ZeroMQ