CVE-2025-14926: transformers: Code Injection enables RCE
CVE-2025-14926 is a code injection flaw in Hugging Face Transformers that allows RCE when converting malicious model checkpoints. Any team that downloads and converts third-party models, a common MLOps practice, is at risk. Immediately audit Transformers usage across your ML pipelines, patch to a fixed version, and enforce a model provenance policy restricting checkpoint sources to verified publishers.
Risk Assessment
High risk for organizations running ML pipelines that ingest external or community-sourced model checkpoints. While user interaction is required (a practitioner must explicitly run convert_config on a malicious checkpoint), this is standard MLOps behavior when fine-tuning or porting models. The Hugging Face Hub's open-publishing model makes distribution of weaponized checkpoints trivial. No CVSS score is yet assigned, but the CWE-94 (Code Injection) primitive resulting in arbitrary RCE places practical severity at Critical for affected workflows. EPSS is unavailable, but exploit complexity is moderate given the well-understood nature of unsafe deserialization/eval patterns in Python ML tooling.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| transformers | pip | — | No patch |
Do you use transformers? Assume you're affected until a patched release ships.
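A first step is simply inventorying which environments have transformers installed. A minimal Python check follows; the helper name is ours, and since no fixed release exists yet, any installed version should be treated as vulnerable:

```python
# Inventory aid: report the installed transformers version, if any.
# With no patched release published, this cannot give a pass/fail verdict;
# it only tells you which environments to track.
from importlib.metadata import version, PackageNotFoundError


def transformers_version():
    """Return the installed transformers version string, or None if absent."""
    try:
        return version("transformers")
    except PackageNotFoundError:
        return None


if __name__ == "__main__":
    v = transformers_version()
    if v is None:
        print("transformers is not installed in this environment")
    else:
        print(f"transformers {v} installed; treat as vulnerable until a fix ships")
```

Running this across every virtualenv, container image, and CI runner gives you the asset list the remediation steps below apply to.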
Recommended Action
Six steps, in priority order:
1. PATCH: Upgrade Hugging Face Transformers to the patched version once released; monitor https://github.com/huggingface/transformers/security/advisories for the fix.
2. RESTRICT SOURCES: Enforce a model allowlist: only load checkpoints from verified organizational accounts or cryptographically signed sources. Block unauthenticated Hugging Face Hub downloads in production pipelines.
3. SANDBOX: Run all model conversion and checkpoint loading operations inside isolated containers or VMs with no network access and minimal filesystem permissions.
4. AUDIT CODE: Search the codebase for calls to convert_config and any eval/exec patterns in Transformers-dependent code.
5. DETECT: Alert on unexpected child process spawning from Python ML processes, outbound connections from training/inference hosts, and anomalous file writes in model storage directories.
6. INTERIM WORKAROUND: Disable or gate the convert_config workflow until patched; require security review before introducing new model checkpoints into any pipeline.
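The RESTRICT SOURCES step can be sketched as a provenance gate called before any Hub download or conversion. Everything here is illustrative: the allowlist entries and the gate_download helper are our own names, not part of any official API.

```python
# Sketch of a provenance gate: permit Hub-style repo ids ("namespace/model")
# only when the namespace is on an organizational allowlist.
ALLOWED_NAMESPACES = {"my-org", "google", "facebook"}  # example entries only


def gate_download(repo_id: str) -> None:
    """Raise PermissionError unless repo_id is under an approved namespace."""
    namespace, _, name = repo_id.partition("/")
    if not name or namespace not in ALLOWED_NAMESPACES:
        raise PermissionError(f"checkpoint source not on allowlist: {repo_id!r}")


# Usage: call before any hub fetch or conversion step.
gate_download("my-org/sew-small")        # passes silently
# gate_download("attacker/sew-small")    # raises PermissionError
```

In practice the allowlist would live in configuration under change control, and the gate would wrap whatever download helper your pipeline already uses.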
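For the AUDIT CODE step, an AST-based sweep finds call sites of convert_config, eval, and exec more reliably than a plain text search, since it ignores comments and strings. This is a triage aid sketched for this advisory, not a complete scanner:

```python
# Static sweep for risky call sites in a Python source tree.
import ast
from pathlib import Path

RISKY_CALLS = {"convert_config", "eval", "exec"}


def find_risky_calls(source: str, filename: str = "<string>"):
    """Return (line, call_name) pairs for risky call sites in one file."""
    hits = []
    for node in ast.walk(ast.parse(source, filename=filename)):
        if isinstance(node, ast.Call):
            func = node.func
            # Handle both bare calls (eval(...)) and method-style calls
            # (module.convert_config(...)).
            name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", "")
            if name in RISKY_CALLS:
                hits.append((node.lineno, name))
    return hits


def scan_tree(root: str) -> None:
    """Print every risky call site under root, one per line."""
    for path in Path(root).rglob("*.py"):
        for line, name in find_risky_calls(path.read_text(errors="ignore"), str(path)):
            print(f"{path}:{line}: call to {name}")
```

Any hit on convert_config with externally sourced checkpoints deserves manual review against the interim workaround above.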
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-14926?
CVE-2025-14926 is a code injection flaw in Hugging Face Transformers that allows RCE when converting malicious model checkpoints. Any team that downloads and converts third-party models — a common MLOps practice — is at risk. Immediately audit Transformers usage across your ML pipelines, patch to a fixed version, and enforce a model provenance policy restricting checkpoint sources to verified publishers.
Is CVE-2025-14926 actively exploited?
No confirmed active exploitation of CVE-2025-14926 has been reported, but organizations should still patch proactively.
How to fix CVE-2025-14926?
1. PATCH: Upgrade Hugging Face Transformers to the patched version once released; monitor https://github.com/huggingface/transformers/security/advisories for the fix.
2. RESTRICT SOURCES: Enforce a model allowlist: only load checkpoints from verified organizational accounts or cryptographically signed sources. Block unauthenticated Hugging Face Hub downloads in production pipelines.
3. SANDBOX: Run all model conversion and checkpoint loading operations inside isolated containers or VMs with no network access and minimal filesystem permissions.
4. AUDIT CODE: Search the codebase for calls to convert_config and any eval/exec patterns in Transformers-dependent code.
5. DETECT: Alert on unexpected child process spawning from Python ML processes, outbound connections from training/inference hosts, and anomalous file writes in model storage directories.
6. INTERIM WORKAROUND: Disable or gate the convert_config workflow until patched; require security review before introducing new model checkpoints into any pipeline.
What systems are affected by CVE-2025-14926?
This vulnerability affects the following AI/ML architecture patterns: training pipelines, model fine-tuning workflows, MLOps model conversion utilities, model serving (pre-deployment conversion stage), CI/CD pipelines with automated model validation.
What is the CVSS score for CVE-2025-14926?
No CVSS score has been assigned yet.
Technical Details
NVD Description
Hugging Face Transformers SEW convert_config Code Injection Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must convert a malicious checkpoint. The specific flaw exists within the convert_config function. The issue results from the lack of proper validation of a user-supplied string before using it to execute Python code. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-28251.
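The advisory does not publish the vulnerable code, so the following is only a schematic of the CWE-94 pattern it describes (an attacker-controlled config string reaching eval) alongside the standard safer alternative, ast.literal_eval, which accepts only Python literals:

```python
import ast


# Schematic of the flaw class described above (CWE-94), NOT the actual
# transformers code: an attacker-controlled config string reaches eval().
def unsafe_parse(config_value: str):
    # A value like "__import__('os').system('...')" would execute here.
    return eval(config_value)


# Safer pattern: ast.literal_eval accepts only Python literals (numbers,
# strings, tuples, lists, dicts, booleans) and rejects calls and imports.
def safe_parse(config_value: str):
    return ast.literal_eval(config_value)


safe_parse("(64, 128)")            # -> (64, 128), a harmless literal
# safe_parse("__import__('os')")   # raises ValueError instead of executing
```

Structured configs should be parsed with literal-only parsers (ast.literal_eval, json.loads) rather than anything that evaluates arbitrary expressions.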
Exploitation Scenario
An adversary creates a Hugging Face account and publishes a SEW model checkpoint with a maliciously crafted configuration file. The config contains a payload exploiting the lack of input validation in convert_config — for example, a string value that is passed directly to exec() or eval() in the Python runtime. The attacker promotes the model on social media, AI forums, or submits it as a dependency update to an open-source project. A data scientist or automated MLOps pipeline downloads and calls convert_config on the checkpoint during model evaluation or fine-tuning preparation. Arbitrary code executes with the privileges of the ML process — potentially exfiltrating API keys from environment variables, pivoting to cloud storage containing proprietary training data, or establishing persistence on GPU training infrastructure.
Weaknesses (CWE)
CWE-94: Improper Control of Generation of Code ('Code Injection')
Related Vulnerabilities
All of the following affect the same package (transformers):

| CVE | CVSS | Description |
|---|---|---|
| CVE-2024-3568 | 9.6 | HuggingFace Transformers: RCE via pickle deserialization |
| CVE-2024-11393 | 8.8 | Transformers: RCE via MaskFormer model deserialization |
| CVE-2023-6730 | 8.8 | HuggingFace Transformers: RCE via unsafe deserialization |
| CVE-2024-11392 | 8.8 | HuggingFace Transformers: RCE via config deserialization |
| CVE-2024-11394 | 8.8 | Transformers: RCE via Trax model deserialization |