CVE-2025-14929: transformers: Deserialization enables RCE

Severity: Unknown
Published December 23, 2025
CISO Take

CVE-2025-14929 is a deserialization RCE in the X-CLIP checkpoint-loading path of Hugging Face Transformers, one of the most widely deployed ML frameworks in the world. Any team that loads X-CLIP model checkpoints from external sources (Hugging Face Hub, shared drives, third-party repos) is one malicious file away from full process compromise. Immediate action: audit all checkpoint-loading workflows, restrict sources to cryptographically verified internal registries, and isolate model-loading processes pending an official patch.

Risk Assessment

Effective risk is HIGH despite the absence of a formal CVSS score. Deserialization RCEs (CWE-502) consistently score 9.8 in analogous CVEs. The 'user interaction required' qualifier is misleading in ML contexts: downloading and loading model checkpoints is a routine, automated workflow in most MLOps pipelines — not a suspicious action that would trigger user scrutiny. Attack surface is broad given Transformers' ubiquity. Exploitation is low-complexity for an attacker who can position a malicious checkpoint where ML engineers or automated pipelines will retrieve it.

Affected Systems

Package: transformers
Ecosystem: pip
Vulnerable range: not specified
Patched: No patch available

Package profile: 160.2K · OpenSSF Scorecard 4.9 · 7.8K dependents · 39% of CVEs patched · ~101 days median time to patch

Do you use transformers? You're affected.

Severity & Risk

CVSS 3.1: N/A
EPSS: 0.2% chance of exploitation in 30 days (higher than 48% of all CVEs)
Exploitation status: No known exploitation
Sophistication: Moderate

Recommended Action

6 steps
  1. Pin and verify: restrict model checkpoint loading to a curated internal registry with SHA-256 digest verification before any load operation.

  2. Isolate: run all model loading and checkpoint conversion in ephemeral, network-restricted containers with no access to production credentials or data.

  3. Avoid unsafe deserialization: audit code for `pickle.load`, `torch.load` without `weights_only=True`, and Transformers checkpoint conversion calls on untrusted inputs.

  4. Monitor: alert on unexpected child process spawning or outbound network connections from model serving or training processes.

  5. Patch: watch the Hugging Face Transformers GitHub and apply the fix immediately upon release — ZDI typically coordinates disclosure with vendor patches.

  6. Inventory: identify all CI/CD jobs and automated pipelines that call Transformers checkpoint conversion utilities and treat them as high-risk until patched.
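Steps 1 and 3 can be combined in a small loading wrapper. A minimal sketch, assuming PyTorch-format checkpoints; the function names are illustrative, and the expected digest is a placeholder you would pin from your internal registry:

```python
import hashlib


def sha256_of(path: str) -> str:
    """Stream the file so large checkpoints don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def load_verified_checkpoint(path: str, expected_sha256: str):
    """Refuse to deserialize anything whose digest isn't pinned."""
    digest = sha256_of(path)
    if digest != expected_sha256:
        raise ValueError(f"checkpoint digest mismatch: got {digest}")
    import torch  # requires PyTorch >= 1.13 for weights_only

    # weights_only=True restricts unpickling to tensors and primitive
    # containers, rejecting the arbitrary-object gadgets this CVE class abuses.
    return torch.load(path, weights_only=True)
```

Note that verification happens before any deserialization is attempted; `weights_only=True` is defense in depth, not a substitute for trusting the source.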

CISA SSVC Assessment

Decision: Track
Exploitation: none
Automatable: No
Technical Impact: total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 9 - Risk management system
ISO 42001
A.6.1.3 - AI system risk treatment
A.6.2.3 - AI supplier and third-party relationships
A.8.4 - AI suppliers
NIST AI RMF
MANAGE 2.2 - Treatments, responses, and recovery for identified risks
OWASP LLM Top 10
LLM03:2025 - Supply Chain
LLM05:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2025-14929?

CVE-2025-14929 is a deserialization RCE in the X-CLIP checkpoint-loading path of Hugging Face Transformers, one of the most widely deployed ML frameworks in the world. Any team that loads X-CLIP model checkpoints from external sources (Hugging Face Hub, shared drives, third-party repos) is one malicious file away from full process compromise. Immediate action: audit all checkpoint-loading workflows, restrict sources to cryptographically verified internal registries, and isolate model-loading processes pending an official patch.

Is CVE-2025-14929 actively exploited?

No confirmed active exploitation of CVE-2025-14929 has been reported, but organizations should apply mitigations now and patch as soon as a fix is released.

How to fix CVE-2025-14929?

1. Pin and verify: restrict model checkpoint loading to a curated internal registry with SHA-256 digest verification before any load operation.
2. Isolate: run all model loading and checkpoint conversion in ephemeral, network-restricted containers with no access to production credentials or data.
3. Avoid unsafe deserialization: audit code for `pickle.load`, `torch.load` without `weights_only=True`, and Transformers checkpoint conversion calls on untrusted inputs.
4. Monitor: alert on unexpected child process spawning or outbound network connections from model serving or training processes.
5. Patch: watch the Hugging Face Transformers GitHub and apply the fix immediately upon release — ZDI typically coordinates disclosure with vendor patches.
6. Inventory: identify all CI/CD jobs and automated pipelines that call Transformers checkpoint conversion utilities and treat them as high-risk until patched.

What systems are affected by CVE-2025-14929?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps pipelines, fine-tuning workflows, multimodal AI pipelines.

What is the CVSS score for CVE-2025-14929?

No CVSS score has been assigned yet.

Technical Details

NVD Description

Hugging Face Transformers X-CLIP Checkpoint Conversion Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of checkpoints. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-28308.

Exploitation Scenario

An adversary crafts a malicious X-CLIP checkpoint file by serializing a Python object with a `__reduce__` method that executes a reverse shell payload. The file is published to Hugging Face Hub under a typosquatted or compromised account mimicking a legitimate X-CLIP variant (e.g., 'microsoft/xclip-base-patch32-finetuned'). A data scientist or automated MLOps pipeline calls `XCLIPModel.from_pretrained()` on the malicious checkpoint during a fine-tuning or evaluation job. Transformers' checkpoint conversion code deserializes the file without validation, triggering code execution. The attacker gains a shell in the training environment, exfiltrates model weights, API keys, and dataset access credentials, then pivots to the broader ML infrastructure.
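The pickle gadget at the heart of this scenario fits in a few lines. A benign sketch of the technique: `str.upper` stands in for a real payload, which would instead return something like `os.system` with a shell command.

```python
import pickle


class MaliciousCheckpoint:
    def __reduce__(self):
        # __reduce__ lets a pickled object name an arbitrary callable that
        # pickle will invoke at *load* time. str.upper is a harmless stand-in
        # for an attacker's os.system("reverse shell ...").
        return (str.upper, ("payload executed on load",))


blob = pickle.dumps(MaliciousCheckpoint())  # the poisoned "checkpoint" bytes
result = pickle.loads(blob)                 # loading alone runs the callable
print(result)  # → PAYLOAD EXECUTED ON LOAD
```

This is why pickle-format checkpoints from untrusted sources can never be made safe by inspection of the loaded object: the code runs during deserialization, before any result is returned.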

Weaknesses (CWE)

CWE-502 - Deserialization of Untrusted Data

Timeline

Published
December 23, 2025
Last Modified
January 21, 2026
First Seen
December 23, 2025

Related Vulnerabilities