CVE-2025-14920: transformers: Deserialization enables RCE

Severity: Unknown
Published December 23, 2025
CISO Take

CVE-2025-14920 is a deserialization RCE in Hugging Face Transformers' Perceiver model loader — an attacker can achieve full code execution on any system that loads a malicious model file. Organizations pulling models from HuggingFace Hub, shared drives, or external sources are directly exposed. Immediate action: audit where Transformers is deployed, restrict model loading to verified/signed sources, and patch or pin to a fixed version once available.

Risk Assessment

High risk despite missing CVSS score. CWE-502 deserialization flaws are historically scored 8.8–9.8 when paired with RCE impact. The attack requires user interaction (loading a malicious file), which lowers opportunistic risk but is trivially bypassed in ML workflows where engineers routinely download third-party models. The real threat vector is supply chain: a poisoned model uploaded to HuggingFace Hub or a shared S3 bucket. AI/ML teams run Transformers locally with full user privileges, compounding blast radius. No patch or CVSS details published yet as of the disclosure date (2025-12-23), indicating early-stage disclosure.

Affected Systems

Package: transformers
Ecosystem: pip
Vulnerable Range: not yet specified
Patched: No patch available

160.2K · OpenSSF 4.9 · 7.8K dependents · pushed 6d ago · 39% patched · ~101d to patch

Do you use transformers? You're affected.

Severity & Risk

CVSS 3.1: N/A
EPSS: 0.5% chance of exploitation in 30 days (higher than 66% of all CVEs)
Exploitation Status: No known exploitation
Sophistication: Moderate

Recommended Action

7 steps
  1. PATCH

    Monitor Hugging Face Transformers releases for a fix to the Perceiver model file parser; pin to the patched version as soon as one is available.

  2. INVENTORY

    Identify all environments with transformers installed that load Perceiver models (grep -r 'Perceiver' --include='*.py').

  3. RESTRICT MODEL SOURCES

    Enforce allowlisting — only load models from internal artifact registries with SHA-256 hash verification. Reject models from arbitrary URLs or unverified Hub accounts.

  4. SANDBOX

    Run model loading in isolated environments (separate containers/VMs with no network egress) and inspect model files with tools like fickling before loading in production.

  5. DETECT

    Alert on unexpected network connections or child process spawns from Python ML processes.

  6. WORKAROUND

    If Perceiver models are not in use, block their loading at the framework level or remove the model class from the import path.

  7. REVIEW

    Audit recent downloads of Perceiver model files from public registries for tampering.
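Step 3's allowlisting can be sketched in a few lines of Python. This is an illustrative sketch, not an official Transformers API: the allowlist contents and function names are assumptions, and in practice the digests would come from your internal artifact registry.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of vetted model-artifact digests (placeholder entry).
APPROVED_SHA256 = {
    "0" * 64: "example-approved-model",
}

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so multi-GB model weights never sit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_before_load(model_file: Path) -> str:
    """Refuse to proceed unless the artifact's digest is on the allowlist."""
    digest = sha256_of(model_file)
    if digest not in APPROVED_SHA256:
        raise PermissionError(
            f"{model_file} (sha256={digest[:12]}...) is not on the model allowlist"
        )
    return digest
```

Calling `verify_before_load()` on every downloaded artifact before handing it to `from_pretrained()` turns "trust the Hub" into "trust a digest your team vetted".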
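Step 5's detection idea can be approximated in-process with Python's audit hooks (CPython 3.8+). Treating any process spawn or outbound connection during model loading as suspicious is an assumption made for illustration; production detection would typically live in EDR tooling instead.

```python
import sys

suspicious_events = []

def audit_hook(event, args):
    # CPython fires these audit events when code spawns processes or opens sockets,
    # which is exactly what a deserialization payload tends to do.
    if event in ("subprocess.Popen", "os.system", "socket.connect"):
        suspicious_events.append((event, args))

sys.addaudithook(audit_hook)

# Any later call such as subprocess.run(...) made while a model file is being
# deserialized now lands in suspicious_events for logging or alerting.
```

Note that audit hooks cannot be removed once installed, so this belongs in a dedicated model-loading process, not a general-purpose interpreter.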

CISA SSVC Assessment

Decision: Track
Exploitation: none
Automatable: No
Technical Impact: total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
  - Article 9 - Risk management system
  - Article 15 - Accuracy, robustness and cybersecurity
  - Article 17 - Quality management system
ISO 42001
  - A.6.1.3 - AI supply chain security
  - A.8.4 - AI system technical security
  - A.10.1 - AI system supply chain security
NIST AI RMF
  - GOVERN-6.1 - AI supply chain risk management
  - GV-1.6 - Organizational risk tolerance
  - MANAGE-2.2 - Mechanisms to respond to AI risks
  - MS-2.5 - Practices and personnel for AI risk management
OWASP LLM Top 10
  - LLM03:2025 - Supply Chain Vulnerabilities
  - LLM05 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2025-14920?

CVE-2025-14920 is a deserialization RCE in Hugging Face Transformers' Perceiver model loader — an attacker can achieve full code execution on any system that loads a malicious model file. Organizations pulling models from HuggingFace Hub, shared drives, or external sources are directly exposed. Immediate action: audit where Transformers is deployed, restrict model loading to verified/signed sources, and patch or pin to a fixed version once available.

Is CVE-2025-14920 actively exploited?

No confirmed active exploitation of CVE-2025-14920 has been reported, but organizations should still patch proactively.

How to fix CVE-2025-14920?

1. PATCH: Monitor HuggingFace Transformers releases for a fix targeting the Perceiver model file parser; pin to patched version immediately. 2. INVENTORY: Identify all environments with `transformers` installed that load Perceiver models (`grep -r 'Perceiver' --include='*.py'`). 3. RESTRICT MODEL SOURCES: Enforce allowlisting — only load models from internal artifact registries with SHA-256 hash verification. Reject models from arbitrary URLs or unverified Hub accounts. 4. SANDBOX: Run model loading in isolated environments (separate containers/VMs with no network egress) and inspect model files with tools like `fickling` before loading in production. 5. DETECT: Alert on unexpected network connections or child process spawns from Python ML processes. 6. WORKAROUND: If Perceiver models are not in use, block their loading at the framework level or remove the model class from the import path. 7. REVIEW: Audit recent downloads of Perceiver model files from public registries for tampering.

What systems are affected by CVE-2025-14920?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps platforms, data science environments, model registries.

What is the CVSS score for CVE-2025-14920?

No CVSS score has been assigned yet.

Technical Details

NVD Description

Hugging Face Transformers Perceiver Model Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of model files. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-25423.

Exploitation Scenario

An adversary crafts a malicious Perceiver model file containing a serialized Python object that executes arbitrary commands upon deserialization (classic pickle exploit pattern: `__reduce__` returning `os.system` or `subprocess`). The attacker uploads this model to HuggingFace Hub under a plausible name (e.g., 'perceiver-base-finetuned-images-v2'). A data scientist or automated pipeline calls `PerceiverModel.from_pretrained('attacker/perceiver-base-finetuned-images-v2')`, the Transformers library deserializes the model file without validation, and the payload executes — establishing a reverse shell, exfiltrating API keys from environment variables, or pivoting to internal infrastructure. In CI/CD contexts where models are pulled during training jobs, this achieves server-side RCE with no further attacker interaction.
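The `__reduce__` pattern described above can be demonstrated safely with a benign payload. This is an illustrative sketch of the generic pickle-exploitation technique, not the actual CVE-2025-14920 payload; a real attack would return `os.system` or `subprocess.call` instead of `print`.

```python
import pickle

class Payload:
    """Benign stand-in for a malicious object. On unpickling, pickle calls
    the callable returned by __reduce__ with the given arguments."""
    def __reduce__(self):
        return (print, ("code executed during deserialization",))

blob = pickle.dumps(Payload())

# Merely *loading* the bytes runs the callable -- no attribute access,
# no method call, no use of the resulting object is required.
pickle.loads(blob)
```

This is why inspecting untrusted model files (e.g., with `fickling`) or avoiding pickle-based formats entirely matters: the damage is done at load time, before any application logic runs.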

Weaknesses (CWE)

CWE-502 - Deserialization of Untrusted Data

Timeline

Published
December 23, 2025
Last Modified
January 21, 2026
First Seen
December 23, 2025
