CVE-2025-14927: transformers: Code Injection enables RCE
CVE-2025-14927 is an RCE via code injection in Hugging Face Transformers' SEW-D checkpoint conversion function — a routine ML operation that engineers perform without suspicion. Any organization that downloads and converts external model checkpoints is exposed; a single poisoned checkpoint on Hugging Face Hub is enough to compromise a developer workstation or training server. Patch Transformers immediately, enforce an approved-sources allowlist for checkpoints, and run all conversion operations in isolated sandboxes.
Risk Assessment
Despite lacking a published CVSS score, the risk is HIGH. Code injection via unvalidated user-supplied strings in a popular ML framework (Hugging Face Transformers) with a wide install base creates broad exposure. Exploitation is realistic: ML engineers routinely download and convert checkpoints from public repositories as part of standard workflows, and the 'user interaction required' qualifier maps directly to normal day-to-day ML operations — not a security-conscious action. The blast radius includes CI/CD pipelines, training servers, and developer workstations, which typically hold cloud credentials, dataset access, and model artifacts.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| transformers | pip | — | No patch |
Do you use transformers? You're affected.
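Since no patched version is published yet, a first step is simply inventorying which environments have transformers installed and at what version. A minimal sketch using only the standard library (the helper name is ours, not part of any tooling):

```python
from importlib.metadata import version, PackageNotFoundError
from typing import Optional

def transformers_version() -> Optional[str]:
    """Return the installed transformers version, or None if not installed."""
    try:
        return version("transformers")
    except PackageNotFoundError:
        return None

print(transformers_version())
```

Run this across build images and developer machines to know where a patch will need to land once one is released.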
Recommended Action
1. PATCH: Upgrade huggingface/transformers to the version that addresses ZDI-25-1148; monitor the official GitHub repo and PyPI for the patched release.
2. ALLOWLIST: Restrict checkpoint sources to verified internal registries or a curated subset of Hugging Face Hub organizations. Reject conversion of checkpoints from unknown authors.
3. SANDBOX: Run all model conversion and checkpoint loading operations in ephemeral, network-isolated containers with no access to credentials or sensitive data.
4. AUDIT: Review CI/CD pipelines and training scripts for automated checkpoint downloads and conversions; prioritize those running with elevated cloud permissions.
5. DETECT: Monitor for unexpected subprocess spawning or network calls from Python processes performing model loading. Alert on convert_config invocations against externally sourced checkpoints.
6. REVIEW: Scan internal model registries for SEW-D checkpoints obtained from external sources before the patch is applied.
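The ALLOWLIST step can be enforced in code before any download happens. A minimal sketch, assuming a Hub-style `org/model` repo id; `APPROVED_ORGS` and the helper name are hypothetical placeholders for an organization-specific policy:

```python
# Hypothetical allowlist of checkpoint sources your organization trusts.
APPROVED_ORGS = {"facebook", "google", "my-internal-registry"}

def is_approved_checkpoint(repo_id: str) -> bool:
    """Accept only repo ids of the form 'org/model' whose org is allowlisted."""
    org, sep, model = repo_id.partition("/")
    return bool(sep) and bool(model) and org in APPROVED_ORGS

# Gate a conversion workflow on the check before fetching anything:
for repo in ("facebook/sew-d-tiny-100k", "untrusted-author/sew-d-model"):
    print(repo, "->", "allow" if is_approved_checkpoint(repo) else "block")
```

In practice this check would sit in front of whatever code performs the download, so an unapproved source fails closed rather than reaching the conversion step.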
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-14927?
CVE-2025-14927 is an RCE via code injection in Hugging Face Transformers' SEW-D checkpoint conversion function — a routine ML operation that engineers perform without suspicion. Any organization that downloads and converts external model checkpoints is exposed; a single poisoned checkpoint on Hugging Face Hub is enough to compromise a developer workstation or training server. Patch Transformers immediately, enforce an approved-sources allowlist for checkpoints, and run all conversion operations in isolated sandboxes.
Is CVE-2025-14927 actively exploited?
No confirmed active exploitation of CVE-2025-14927 has been reported, but organizations should still patch proactively.
How to fix CVE-2025-14927?
1. PATCH: Upgrade huggingface/transformers to the version that addresses ZDI-25-1148; monitor the official GitHub repo and PyPI for the patched release.
2. ALLOWLIST: Restrict checkpoint sources to verified internal registries or a curated subset of Hugging Face Hub organizations. Reject conversion of checkpoints from unknown authors.
3. SANDBOX: Run all model conversion and checkpoint loading operations in ephemeral, network-isolated containers with no access to credentials or sensitive data.
4. AUDIT: Review CI/CD pipelines and training scripts for automated checkpoint downloads and conversions; prioritize those running with elevated cloud permissions.
5. DETECT: Monitor for unexpected subprocess spawning or network calls from Python processes performing model loading. Alert on convert_config invocations against externally sourced checkpoints.
6. REVIEW: Scan internal model registries for SEW-D checkpoints obtained from external sources before the patch is applied.
What systems are affected by CVE-2025-14927?
This vulnerability affects the following AI/ML architecture patterns: training pipelines, model conversion workflows, MLOps pipelines, model serving, fine-tuning infrastructure.
What is the CVSS score for CVE-2025-14927?
No CVSS score has been assigned yet.
Technical Details
NVD Description
Hugging Face Transformers SEW-D convert_config Code Injection Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must convert a malicious checkpoint. The specific flaw exists within the convert_config function. The issue results from the lack of proper validation of a user-supplied string before using it to execute Python code. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-28252.
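The exact vulnerable code path in Transformers has not been published, but the flaw class the ZDI description names — executing Python built from an unvalidated user-supplied string — and a safer alternative can be illustrated. The function names below are hypothetical and do not correspond to the actual Transformers code:

```python
import ast

def unsafe_parse(value: str):
    # Flaw class: eval() on attacker-controlled input. A checkpoint config
    # containing "__import__('os').system('id')" would run that command.
    # Never call this on untrusted data.
    return eval(value)

def safe_parse(value: str):
    # ast.literal_eval accepts only Python literals (numbers, strings,
    # lists, dicts, ...) and raises ValueError on anything executable.
    return ast.literal_eval(value)

print(safe_parse("[1, 2, 3]"))            # parses plain data fine
try:
    safe_parse("__import__('os').system('id')")
except ValueError:
    print("rejected non-literal input")   # code is refused, not executed
```

The general lesson applies beyond this CVE: any field of a downloaded checkpoint or config must be treated as untrusted input, never as code.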
Exploitation Scenario
An adversary publishes a weaponized SEW-D model checkpoint to Hugging Face Hub under a plausible researcher or organization account. The checkpoint's configuration contains a crafted string that, when processed by convert_config, is passed unsanitized to a Python code execution path. An ML engineer — or an automated training pipeline — downloads the checkpoint and runs conversion as part of a fine-tuning or evaluation workflow. The injected code executes in the engineer's context: it exfiltrates cloud credentials from environment variables or ~/.aws/credentials, deploys a reverse shell back to attacker infrastructure, or silently poisons the training dataset. Because the attack surface is a trusted developer operation, it bypasses most security controls and goes undetected until significant damage is done.
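One lightweight way to surface this kind of post-exploitation behavior in-process is CPython's audit hooks (`sys.addaudithook`, Python 3.8+), which fire on events such as process creation. A minimal detection sketch; the event set watched and the in-memory log are assumptions, and a real deployment would forward hits to a SIEM rather than collect them locally:

```python
import sys
import subprocess

# Audit events that are suspicious while a conversion workload runs.
SUSPICIOUS = {"subprocess.Popen", "os.system", "os.exec"}
events = []

def audit(event: str, args) -> None:
    # Record any process-spawn attempt made by this interpreter,
    # e.g. while a downloaded checkpoint is being converted.
    if event in SUSPICIOUS:
        events.append((event, args))

sys.addaudithook(audit)

# Anything that shells out from here on is recorded:
subprocess.run([sys.executable, "-c", "pass"])
print(events)
```

Because audit hooks cannot be removed once installed, injected code cannot easily silence them from within the same interpreter, though they are a tripwire rather than a sandbox and complement, not replace, the container isolation recommended above.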
Related Vulnerabilities
| CVE | CVSS | Description |
|---|---|---|
| CVE-2024-3568 | 9.6 | HuggingFace Transformers: RCE via pickle deserialization |
| CVE-2024-11393 | 8.8 | Transformers: RCE via MaskFormer model deserialization |
| CVE-2023-6730 | 8.8 | HuggingFace Transformers: RCE via unsafe deserialization |
| CVE-2024-11392 | 8.8 | HuggingFace Transformers: RCE via config deserialization |
| CVE-2024-11394 | 8.8 | Transformers: RCE via Trax model deserialization |

All of the above affect the same package: transformers.