CVE-2025-14921: transformers: Deserialization enables RCE

Severity: Unknown
Published December 23, 2025
CISO Take

If your organization loads Transformer-XL models from any external source — Hugging Face Hub, shared storage, or third-party repos — you have a live RCE exposure. Update the transformers library immediately and enforce model-source allow-listing. Until patched, treat any externally-sourced Transformer-XL model file as untrusted and sandbox or block its loading in production and CI/CD pipelines.

Risk Assessment

Despite the absent CVSS score, CWE-502 deserialization RCE vulnerabilities historically land at Critical (CVSS 9.0+). The 'user interaction required' qualifier is effectively meaningless in ML contexts — loading a pre-trained model via from_pretrained() is a routine, unsuspicious developer action that provides no security barrier. Hugging Face Transformers is deployed across hundreds of thousands of organizations, making the blast radius exceptionally large. Exploitation complexity is moderate: crafting a malicious serialized pickle-based model file is a documented, repeatable technique that does not require novel AI expertise. Risk is elevated for teams that pull models from public registries without cryptographic integrity verification.
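The pickle risk described above can be demonstrated in a few lines. This benign sketch shows why loading a pickle-based file is equivalent to running code: the `Payload` class and the `eval` call stand in for an attacker's `os.system` payload and are purely illustrative.

```python
import pickle

# Benign demonstration of CWE-502: __reduce__ lets any pickled object
# name a callable that pickle.loads() will invoke during deserialization.
# A malicious model file embeds the same trick with os.system/subprocess.
class Payload:
    def __reduce__(self):
        # The (callable, args) pair returned here is executed on load.
        return (eval, ("'code executed during pickle.loads'",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # runs eval(...) as a side effect of "loading"
print(result)
```

No exploit knowledge is required beyond this pattern, which is why the "user interaction required" qualifier offers little comfort: the interaction is an ordinary model load.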

Affected Systems

Package: transformers
Ecosystem: pip
Vulnerable Range: (not listed)
Patched: No patch

If you use the transformers package, you are affected: no patched release is available yet.

Severity & Risk

CVSS 3.1: N/A
EPSS: 0.5% chance of exploitation within 30 days (higher than 66% of all CVEs)
Exploitation Status: No known exploitation
Sophistication: Moderate

Recommended Action

  1. PATCH immediately: upgrade huggingface/transformers to the latest release; check the ZDI advisory at zerodayinitiative.com/advisories/ZDI-25-1149 for the confirmed fixed version.

  2. AUDIT: inventory all code paths that load Transformer-XL models; grep for AutoModelForSequenceClassification, TransfoXLModel, and from_pretrained calls with Transformer-XL checkpoints.

  3. RESTRICT sources: implement an allow-list of trusted model sources and block loading from arbitrary URLs or untrusted registries.

  4. VERIFY integrity: validate SHA-256 checksums or cryptographic signatures of model files before loading.

  5. SANDBOX: run model loading in isolated containers or VMs with no cloud-credential access and network egress filtering.

  6. DETECT: alert on unexpected child-process spawning (subprocess, os.system) originating from Python ML processes.

  7. ROTATE: if compromise is suspected, rotate any credentials accessible to ML workloads or serving infrastructure.
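The allow-list and checksum gates in steps 3 and 4 can be combined into a small pre-load check. Note that TRUSTED_REPOS, KNOWN_HASHES, check_model, and the repo name below are hypothetical helpers an organization would maintain itself; they are not part of the transformers API.

```python
import hashlib
from pathlib import Path

# Illustrative allow-list and hash registry; populate from your own
# model-governance process, not from the model source itself.
TRUSTED_REPOS = {"google-bert/bert-base-uncased"}  # hypothetical entry
KNOWN_HASHES: dict[str, str] = {}  # repo id -> expected SHA-256 hex digest

def sha256_of(path: Path) -> str:
    """Stream the file so large weight files do not exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def check_model(repo_id: str, weights: Path) -> None:
    """Raise before any untrusted file reaches a deserializer."""
    if repo_id not in TRUSTED_REPOS:
        raise PermissionError(f"model source not allow-listed: {repo_id}")
    expected = KNOWN_HASHES.get(repo_id)
    if expected is None or sha256_of(weights) != expected:
        raise ValueError(f"checksum unknown or mismatched for {repo_id}")
```

Calling check_model before any from_pretrained or torch.load ensures a typosquatted repo or tampered file fails closed instead of being deserialized.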

CISA SSVC Assessment

Decision: Track
Exploitation: none
Automatable: No
Technical Impact: total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.


Compliance Impact

This CVE is relevant to:

EU AI Act
- Article 13 - Transparency and provision of information to deployers
- Article 15 - Accuracy, robustness and cybersecurity
- Article 9 - Risk management system

ISO 42001
- A.10.1 - AI supply chain
- A.6.2 - AI supply chain management
- A.8.2 - AI system security controls

NIST AI RMF
- GOVERN-6.1 - Policies and procedures for AI supply chain risk management
- MANAGE-2.2 - Mechanisms for tracking and responding to identified AI risks

OWASP LLM Top 10
- LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2025-14921?

CVE-2025-14921 is a deserialization-of-untrusted-data (CWE-502) vulnerability in the Hugging Face Transformers library's Transformer-XL model loading that enables remote code execution. If your organization loads Transformer-XL models from any external source — Hugging Face Hub, shared storage, or third-party repos — you have a live RCE exposure. Update the transformers library immediately and enforce model-source allow-listing. Until patched, treat any externally sourced Transformer-XL model file as untrusted and sandbox or block its loading in production and CI/CD pipelines.

Is CVE-2025-14921 actively exploited?

No confirmed active exploitation of CVE-2025-14921 has been reported, but organizations should still patch proactively.

How to fix CVE-2025-14921?

1. PATCH immediately: upgrade huggingface/transformers to the latest release; check the ZDI advisory at zerodayinitiative.com/advisories/ZDI-25-1149 for the confirmed fixed version.
2. AUDIT: inventory all code paths that load Transformer-XL models; grep for AutoModelForSequenceClassification, TransfoXLModel, and from_pretrained calls with Transformer-XL checkpoints.
3. RESTRICT sources: implement an allow-list of trusted model sources and block loading from arbitrary URLs or untrusted registries.
4. VERIFY integrity: validate SHA-256 checksums or cryptographic signatures of model files before loading.
5. SANDBOX: run model loading in isolated containers or VMs with no cloud-credential access and network egress filtering.
6. DETECT: alert on unexpected child-process spawning (subprocess, os.system) originating from Python ML processes.
7. ROTATE: if compromise is suspected, rotate any credentials accessible to ML workloads or serving infrastructure.

What systems are affected by CVE-2025-14921?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, fine-tuning pipelines, ML development environments, model registries, CI/CD pipelines for ML.

What is the CVSS score for CVE-2025-14921?

No CVSS score has been assigned yet.

Technical Details

NVD Description

Hugging Face Transformers Transformer-XL Model Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of model files. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-25424.

Exploitation Scenario

An adversary registers a typosquatting account on Hugging Face Hub and publishes a poisoned Transformer-XL model checkpoint under a name close to a popular repo (e.g., 'transfo-xl-wt103-finetuned'). The malicious model file embeds a crafted pickle payload within its serialized weights. A data scientist or automated CI pipeline calls from_pretrained('attacker/transfo-xl-wt103-finetuned') for evaluation or fine-tuning. During deserialization, the pickle payload executes arbitrary Python code in the loading process context — establishing a reverse shell to an attacker-controlled server, exfiltrating cloud credentials from environment variables, or installing a persistent backdoor. In a model serving scenario, this gives the attacker persistent RCE on the inference server with access to all models, API keys, and downstream data stores.
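The child-process detection recommended above can be prototyped in-process with Python's audit hooks (sys.addaudithook, Python 3.8+). Production detection belongs in EDR or seccomp policies; this sketch only illustrates the signal, using the standard CPython audit event names.

```python
import sys

# In-process tripwire for the post-exploitation behavior described above:
# CPython raises audit events for os.system and subprocess.Popen, so a
# hook can flag a "model load" that suddenly spawns child processes.
SUSPICIOUS_EVENTS = {"os.system", "subprocess.Popen"}
alerts = []

def tripwire(event, args):
    if event in SUSPICIOUS_EVENTS:
        alerts.append(event)  # a real deployment would page on-call here

sys.addaudithook(tripwire)  # hooks cannot be removed once installed

import os
os.system("true")  # stands in for a payload spawning a shell
print(alerts)
```

An ML inference or training process has little legitimate reason to shell out, so even this coarse signal has a low false-positive rate in practice.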

Weaknesses (CWE)

CWE-502 - Deserialization of Untrusted Data

Timeline

Published
December 23, 2025
Last Modified
January 21, 2026
First Seen
December 23, 2025

Related Vulnerabilities