CVE-2025-14929: transformers: Deserialization enables RCE
CVE-2025-14929 is a deserialization RCE in the X-CLIP checkpoint loading code of Hugging Face Transformers, one of the most widely deployed ML frameworks. Any team that loads X-CLIP model checkpoints from external sources (Hugging Face Hub, shared drives, third-party repos) is one malicious file away from full process compromise. Immediate action: audit all checkpoint loading workflows, restrict sources to cryptographically verified internal registries, and isolate model loading processes pending an official patch.
Risk Assessment
Effective risk is HIGH despite the absence of a formal CVSS score. Deserialization RCEs (CWE-502) consistently score 9.8 in analogous CVEs. The 'user interaction required' qualifier is misleading in ML contexts: downloading and loading model checkpoints is a routine, automated workflow in most MLOps pipelines — not a suspicious action that would trigger user scrutiny. Attack surface is broad given Transformers' ubiquity. Exploitation is low-complexity for an attacker who can position a malicious checkpoint where ML engineers or automated pipelines will retrieve it.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| transformers | pip | — | No patch |
Do you use transformers? You're affected.
Severity & Risk
Recommended Action
Six steps:

1. Pin and verify: restrict model checkpoint loading to a curated internal registry with SHA-256 digest verification before any load operation.
2. Isolate: run all model loading and checkpoint conversion in ephemeral, network-restricted containers with no access to production credentials or data.
3. Avoid unsafe deserialization: audit code for `pickle.load`, `torch.load` without `weights_only=True`, and Transformers checkpoint conversion calls on untrusted inputs.
4. Monitor: alert on unexpected child process spawning or outbound network connections from model serving or training processes.
5. Patch: watch the Hugging Face Transformers GitHub and apply the fix immediately upon release — ZDI typically coordinates disclosure with vendor patches.
6. Inventory: identify all CI/CD jobs and automated pipelines that call Transformers checkpoint conversion utilities and treat them as high-risk until patched.
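The pin-and-verify step above can be sketched as a digest check that runs before any checkpoint is handed to a loader. This is a minimal illustration, not part of any Transformers API; the function name and chunk size are assumptions:

```python
import hashlib


def verify_checkpoint(path, expected_sha256):
    """Raise unless the file at `path` matches the pinned SHA-256 digest."""
    h = hashlib.sha256()
    # Hash in chunks so large checkpoint files don't need to fit in memory
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    if h.hexdigest() != expected_sha256:
        raise ValueError(f"checkpoint digest mismatch for {path}")
    return path
```

In a pipeline, the expected digest would come from the internal registry's metadata, and `verify_checkpoint` would gate every `from_pretrained` or `torch.load` call on files fetched from outside that registry.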
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Classification
Compliance Impact
This CVE is relevant to:
Frequently Asked Questions
What is CVE-2025-14929?
CVE-2025-14929 is a deserialization RCE in the X-CLIP checkpoint loading code of Hugging Face Transformers, one of the most widely deployed ML frameworks. Any team that loads X-CLIP model checkpoints from external sources (Hugging Face Hub, shared drives, third-party repos) is one malicious file away from full process compromise. Immediate action: audit all checkpoint loading workflows, restrict sources to cryptographically verified internal registries, and isolate model loading processes pending an official patch.
Is CVE-2025-14929 actively exploited?
No confirmed active exploitation of CVE-2025-14929 has been reported. Organizations should still mitigate proactively and apply the vendor fix as soon as one is released.
How to fix CVE-2025-14929?
1. Pin and verify: restrict model checkpoint loading to a curated internal registry with SHA-256 digest verification before any load operation. 2. Isolate: run all model loading and checkpoint conversion in ephemeral, network-restricted containers with no access to production credentials or data. 3. Avoid unsafe deserialization: audit code for `pickle.load`, `torch.load` without `weights_only=True`, and Transformers checkpoint conversion calls on untrusted inputs. 4. Monitor: alert on unexpected child process spawning or outbound network connections from model serving or training processes. 5. Patch: watch the Hugging Face Transformers GitHub and apply the fix immediately upon release — ZDI typically coordinates disclosure with vendor patches. 6. Inventory: identify all CI/CD jobs and automated pipelines that call Transformers checkpoint conversion utilities and treat them as high-risk until patched.
What systems are affected by CVE-2025-14929?
This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps pipelines, fine-tuning workflows, multimodal AI pipelines.
What is the CVSS score for CVE-2025-14929?
No CVSS score has been assigned yet.
Technical Details
NVD Description
Hugging Face Transformers X-CLIP Checkpoint Conversion Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of checkpoints. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-28308.
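The "lack of proper validation of user-supplied data" described above is why the mitigation steps emphasize auditing deserialization call sites. A rough static sweep for `torch.load` calls that omit `weights_only=True` could look like the following sketch. It assumes the simple `torch.load(...)` attribute-call pattern; a real audit would also cover aliased imports, `pickle.load`, and indirect calls:

```python
import ast

SAMPLE_SOURCE = """
import torch
m = torch.load("ckpt.bin")                      # unsafe: full unpickling
w = torch.load("ckpt.bin", weights_only=True)   # safer load path
"""


def find_unsafe_torch_loads(src):
    """Return line numbers of torch.load calls lacking weights_only=True."""
    hits = []
    for node in ast.walk(ast.parse(src)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "load"
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id == "torch"):
            kwargs = {k.arg: k for k in node.keywords}
            safe = ("weights_only" in kwargs
                    and isinstance(kwargs["weights_only"].value, ast.Constant)
                    and kwargs["weights_only"].value.value is True)
            if not safe:
                hits.append(node.lineno)
    return hits
```

Running the sweep over `SAMPLE_SOURCE` flags only the first load, since the second explicitly opts into the restricted unpickler.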
Exploitation Scenario
An adversary crafts a malicious X-CLIP checkpoint file by serializing a Python object with a `__reduce__` method that executes a reverse shell payload. The file is published to Hugging Face Hub under a typosquatted or compromised account mimicking a legitimate X-CLIP variant (e.g., 'microsoft/xclip-base-patch32-finetuned'). A data scientist or automated MLOps pipeline calls `XCLIPModel.from_pretrained()` on the malicious checkpoint during a fine-tuning or evaluation job. Transformers' checkpoint conversion code deserializes the file without validation, triggering code execution. The attacker gains a shell in the training environment, exfiltrates model weights, API keys, and dataset access credentials, then pivots to the broader ML infrastructure.
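The crafting step in this scenario relies on pickle's `__reduce__` protocol: unpickling invokes whatever callable the serialized object returns, so loading the file is itself code execution. A benign demonstration of the mechanism, with a harmless `run` stand-in where an attacker would place `os.system`:

```python
import pickle


def run(cmd):
    # Stand-in for os.system: returns a marker proving code ran at load time
    return f"ran:{cmd}"


class Payload:
    def __reduce__(self):
        # An attacker would return (os.system, ("<reverse shell cmd>",));
        # the callable here is harmless so the effect is observable.
        return (run, ("id",))


blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # merely deserializing the bytes executes run("id")
```

Note that no method on the loaded object is ever called: the callable fires inside `pickle.loads` itself, which is why digest pinning and `weights_only=True` matter more than anything done with the model afterward.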
Weaknesses (CWE)
References
Timeline
Related Vulnerabilities
All in the same package (transformers):

| CVE | CVSS | Summary |
|---|---|---|
| CVE-2024-3568 | 9.6 | HuggingFace Transformers: RCE via pickle deserialization |
| CVE-2024-11393 | 8.8 | Transformers: RCE via MaskFormer model deserialization |
| CVE-2023-6730 | 8.8 | HuggingFace Transformers: RCE via unsafe deserialization |
| CVE-2024-11392 | 8.8 | HuggingFace Transformers: RCE via config deserialization |
| CVE-2024-11394 | 8.8 | Transformers: RCE via Trax model deserialization |