CVE-2025-14920: transformers: Deserialization enables RCE
CVE-2025-14920 is a deserialization RCE in Hugging Face Transformers' Perceiver model loader — an attacker can achieve full code execution on any system that loads a malicious model file. Organizations pulling models from HuggingFace Hub, shared drives, or external sources are directly exposed. Immediate action: audit where Transformers is deployed, restrict model loading to verified/signed sources, and patch or pin to a fixed version once available.
Risk Assessment
High risk despite missing CVSS score. CWE-502 deserialization flaws are historically scored 8.8–9.8 when paired with RCE impact. The attack requires user interaction (loading a malicious file), which lowers opportunistic risk but is trivially bypassed in ML workflows where engineers routinely download third-party models. The real threat vector is supply chain: a poisoned model uploaded to HuggingFace Hub or a shared S3 bucket. AI/ML teams run Transformers locally with full user privileges, compounding blast radius. No patch or CVSS details published yet as of the disclosure date (2025-12-23), indicating early-stage disclosure.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| transformers | pip | — | No patch |
If you use transformers and load Perceiver model files, assume you're affected until a patched version ships.
Recommended Action
7 steps:

1. PATCH: Monitor HuggingFace Transformers releases for a fix targeting the Perceiver model file parser; pin to the patched version immediately.
2. INVENTORY: Identify all environments with `transformers` installed that load Perceiver models (`grep -r 'Perceiver' --include='*.py'`).
3. RESTRICT MODEL SOURCES: Enforce allowlisting — only load models from internal artifact registries with SHA-256 hash verification. Reject models from arbitrary URLs or unverified Hub accounts.
4. SANDBOX: Run model loading in isolated environments (separate containers/VMs with no network egress) and inspect model files with tools like `fickling` before loading in production.
5. DETECT: Alert on unexpected network connections or child process spawns from Python ML processes.
6. WORKAROUND: If Perceiver models are not in use, block their loading at the framework level or remove the model class from the import path.
7. REVIEW: Audit recent downloads of Perceiver model files from public registries for tampering.
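The allowlist-plus-hash approach from the RESTRICT MODEL SOURCES step can be sketched in a few lines of Python. This is a minimal sketch, not an official Transformers API: the `ALLOWLIST` mapping, the file name, and the `verify_artifact` helper are hypothetical placeholders for an internal artifact registry.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: artifact file name -> expected SHA-256 digest,
# populated from your internal registry at build time
ALLOWLIST = {
    # Digest below is the SHA-256 of an empty file, used here as a placeholder
    "perceiver-base.bin": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_artifact(path: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the allowlist."""
    p = Path(path)
    digest = hashlib.sha256(p.read_bytes()).hexdigest()
    return ALLOWLIST.get(p.name) == digest
```

Call `verify_artifact` before any `from_pretrained` on a locally downloaded file, and refuse to load on a mismatch.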
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Classification
Compliance Impact
This CVE is relevant to:
Frequently Asked Questions
What is CVE-2025-14920?
CVE-2025-14920 is a deserialization RCE in Hugging Face Transformers' Perceiver model loader — an attacker can achieve full code execution on any system that loads a malicious model file. Organizations pulling models from HuggingFace Hub, shared drives, or external sources are directly exposed. Immediate action: audit where Transformers is deployed, restrict model loading to verified/signed sources, and patch or pin to a fixed version once available.
Is CVE-2025-14920 actively exploited?
No confirmed active exploitation of CVE-2025-14920 has been reported, but organizations should still patch proactively.
How to fix CVE-2025-14920?
1. PATCH: Monitor HuggingFace Transformers releases for a fix targeting the Perceiver model file parser; pin to patched version immediately. 2. INVENTORY: Identify all environments with `transformers` installed that load Perceiver models (`grep -r 'Perceiver' --include='*.py'`). 3. RESTRICT MODEL SOURCES: Enforce allowlisting — only load models from internal artifact registries with SHA-256 hash verification. Reject models from arbitrary URLs or unverified Hub accounts. 4. SANDBOX: Run model loading in isolated environments (separate containers/VMs with no network egress) and inspect model files with tools like `fickling` before loading in production. 5. DETECT: Alert on unexpected network connections or child process spawns from Python ML processes. 6. WORKAROUND: If Perceiver models are not in use, block their loading at the framework level or remove the model class from the import path. 7. REVIEW: Audit recent downloads of Perceiver model files from public registries for tampering.
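The DETECT step (alerting on unexpected child process spawns from Python ML processes) can be prototyped in-process with Python's audit hooks (PEP 578). This is a sketch for development environments; production monitoring would typically use eBPF or EDR tooling instead, and the event names below are the standard CPython audit events.

```python
import sys

alerts = []

def spawn_monitor(event, args):
    # Record process-spawn audit events; in production, forward to your SIEM
    if event in ("subprocess.Popen", "os.system", "os.posix_spawn"):
        alerts.append(event)

# Audit hooks cannot be removed once installed (by design, PEP 578)
sys.addaudithook(spawn_monitor)

# Simulate a payload spawning a child process during model loading
import subprocess
subprocess.run(["echo", "simulated payload"], capture_output=True)
```

Any model-loading code path that fires one of these events is a strong signal of a deserialization payload.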
What systems are affected by CVE-2025-14920?
This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps platforms, data science environments, model registries.
What is the CVSS score for CVE-2025-14920?
No CVSS score has been assigned yet.
Technical Details
NVD Description
Hugging Face Transformers Perceiver Model Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of model files. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-25423.
Exploitation Scenario
An adversary crafts a malicious Perceiver model file containing a serialized Python object that executes arbitrary commands upon deserialization (classic pickle exploit pattern: `__reduce__` returning `os.system` or `subprocess`). The attacker uploads this model to HuggingFace Hub under a plausible name (e.g., 'perceiver-base-finetuned-images-v2'). A data scientist or automated pipeline calls `PerceiverModel.from_pretrained('attacker/perceiver-base-finetuned-images-v2')`, the Transformers library deserializes the model file without validation, and the payload executes — establishing a reverse shell, exfiltrating API keys from environment variables, or pivoting to internal infrastructure. In CI/CD contexts where models are pulled during training jobs, this achieves server-side RCE with no further attacker interaction.
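The `__reduce__` pattern described above can be demonstrated harmlessly. In this sketch the attacker-controlled callable is a benign `record` function rather than `os.system`; everything else mirrors the real exploit: the payload runs during `pickle.loads`, before any model object is ever used.

```python
import pickle

executed = []

def record(msg):
    # Benign stand-in for os.system/subprocess: proves code ran at load time
    executed.append(msg)

class MaliciousPayload:
    # __reduce__ tells pickle to call record("pwned") during deserialization;
    # a real exploit would return (os.system, ("<shell command>",)) instead
    def __reduce__(self):
        return (record, ("pwned",))

blob = pickle.dumps(MaliciousPayload())  # bytes a poisoned model file would hold
pickle.loads(blob)                       # "loading the model" runs the payload
assert executed == ["pwned"]             # code executed before any model use
```

This is why hash verification and sandboxing must happen before deserialization: by the time a loaded object can be inspected, the payload has already run.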
Weaknesses (CWE)
References
Timeline
Related Vulnerabilities
All of the following affect the same package (`transformers`):

- CVE-2024-3568 (CVSS 9.6): HuggingFace Transformers: RCE via pickle deserialization
- CVE-2024-11393 (CVSS 8.8): Transformers: RCE via MaskFormer model deserialization
- CVE-2023-6730 (CVSS 8.8): HuggingFace Transformers: RCE via unsafe deserialization
- CVE-2024-11392 (CVSS 8.8): HuggingFace Transformers: RCE via config deserialization
- CVE-2024-11394 (CVSS 8.8): Transformers: RCE via Trax model deserialization