CVE-2025-14929

UNKNOWN
Published December 23, 2025
CISO Take

CVE-2025-14929 is a deserialization RCE in the X-CLIP checkpoint-conversion path of Hugging Face Transformers, one of the most widely deployed ML libraries. Any team that loads X-CLIP model checkpoints from external sources (Hugging Face Hub, shared drives, third-party repos) is one malicious file away from full process compromise. Immediate actions: audit all checkpoint-loading workflows, restrict sources to cryptographically verified internal registries, and isolate model-loading processes pending an official patch.

Affected Systems

Package
transformers
Ecosystem
pip
Vulnerable Range
Not specified
Patched
No patch

Do you use transformers? You're affected.

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
KEV Status
Not in KEV
Sophistication
Moderate

Recommended Action

  1. Pin and verify: restrict model checkpoint loading to a curated internal registry with SHA-256 digest verification before any load operation.
  2. Isolate: run all model loading and checkpoint conversion in ephemeral, network-restricted containers with no access to production credentials or data.
  3. Avoid unsafe deserialization: audit code for `pickle.load`, `torch.load` without `weights_only=True`, and Transformers checkpoint conversion calls on untrusted inputs.
  4. Monitor: alert on unexpected child process spawning or outbound network connections from model serving or training processes.
  5. Patch: watch the Hugging Face Transformers GitHub repository and apply the fix immediately upon release; ZDI typically coordinates disclosure with vendor patches.
  6. Inventory: identify all CI/CD jobs and automated pipelines that call Transformers checkpoint conversion utilities and treat them as high-risk until patched.
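The pin-and-verify step can be sketched as below. The registry dictionary, filename, and placeholder digest are illustrative assumptions, not values from this advisory; in practice the digest list would come from an internal model registry.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist pinning approved checkpoint files to SHA-256
# digests published by an internal registry. The filename and the
# placeholder digest are illustrative only.
APPROVED_DIGESTS = {
    "xclip-base-patch32.bin": "<pinned-sha256-hex-digest>",
}

def verify_checkpoint(path: str) -> bool:
    """Return True only when the file's SHA-256 matches its pinned digest."""
    p = Path(path)
    expected = APPROVED_DIGESTS.get(p.name)
    if expected is None:
        return False  # unknown checkpoint: refuse to load it at all
    h = hashlib.sha256()
    with p.open("rb") as f:
        # Stream in chunks so multi-gigabyte checkpoints don't exhaust memory.
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == expected
```

The check must run before the file is handed to any deserialization code; verifying after loading defeats the purpose, since pickle payloads execute during the load itself.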

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 9 - Risk management system
ISO 42001
A.6.1.3 - AI system risk treatment
A.6.2.3 - AI supplier and third-party relationships
A.8.4 - AI suppliers
NIST AI RMF
MANAGE 2.2 - Treatments, responses, and recovery for identified risks
OWASP LLM Top 10
LLM03:2025 - Supply Chain
LLM05:2023 - Supply Chain Vulnerabilities

Technical Details

NVD Description

Hugging Face Transformers X-CLIP Checkpoint Conversion Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of checkpoints. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-28308.
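The flaw class described above, deserializing attacker-controlled data without validation, has a well-known stdlib-level mitigation pattern: a `pickle.Unpickler` subclass that refuses to resolve any class outside an explicit allowlist. This is a generic sketch of that pattern, not the Transformers fix; the allowlist contents are illustrative and a real checkpoint format would need a broader, carefully curated list.

```python
import io
import pickle

class AllowlistUnpickler(pickle.Unpickler):
    """Unpickler that only resolves classes on an explicit allowlist."""

    # Illustrative allowlist; real checkpoint loading would enumerate every
    # (module, name) pair the format legitimately needs.
    ALLOWED = {("collections", "OrderedDict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        # Anything else (os.system, subprocess.Popen, ...) is rejected
        # before it can be instantiated or called.
        raise pickle.UnpicklingError(f"blocked: {module}.{name}")

def safe_loads(data: bytes):
    """Deserialize bytes with the restricted unpickler."""
    return AllowlistUnpickler(io.BytesIO(data)).load()
```

Even with such a guard, the safer long-term direction is a weights-only format with no code-execution surface at all, which is what mitigations like `torch.load(..., weights_only=True)` and safetensors aim for.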

Exploitation Scenario

An adversary crafts a malicious X-CLIP checkpoint file by serializing a Python object with a `__reduce__` method that executes a reverse shell payload. The file is published to Hugging Face Hub under a typosquatted or compromised account mimicking a legitimate X-CLIP variant (e.g., 'microsoft/xclip-base-patch32-finetuned'). A data scientist or automated MLOps pipeline calls `XCLIPModel.from_pretrained()` on the malicious checkpoint during a fine-tuning or evaluation job. Transformers' checkpoint conversion code deserializes the file without validation, triggering code execution. The attacker gains a shell in the training environment, exfiltrates model weights, API keys, and dataset access credentials, then pivots to the broader ML infrastructure.
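The first step of this scenario can be demonstrated with a harmless stand-in payload: unpickling any object whose `__reduce__` returns a callable executes that callable immediately, before a single model weight is inspected.

```python
import pickle

class Payload:
    """Harmless stand-in for a malicious checkpoint object."""

    def __reduce__(self):
        # An attacker would return something like (os.system, ("<cmd>",));
        # we return a benign print call to show the same mechanism safely.
        return (print, ("code executed during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # runs the payload at load time
```

This is why "just open the file to look at it" is already game over for pickle-based checkpoint formats: the act of deserializing is the act of executing.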

Weaknesses (CWE)

CWE-502 - Deserialization of Untrusted Data

Timeline

Published
December 23, 2025
Last Modified
January 21, 2026
First Seen
December 23, 2025