CVE-2025-14927
CVE-2025-14927 is an RCE via code injection in Hugging Face Transformers' SEW-D checkpoint conversion function — a routine ML operation that engineers perform without suspicion. Any organization that downloads and converts external model checkpoints is exposed; a single poisoned checkpoint on Hugging Face Hub is enough to compromise a developer workstation or training server. Patch Transformers immediately, enforce an approved-sources allowlist for checkpoints, and run all conversion operations in isolated sandboxes.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| transformers | pip | — | No patch |
Do you use transformers to convert externally sourced SEW-D checkpoints? You're affected.
Recommended Action
1. PATCH: Upgrade huggingface/transformers to the version that addresses ZDI-25-1148 — monitor the official GitHub repo and PyPI for the patched release.
2. ALLOWLIST: Restrict checkpoint sources to verified, internal registries or a curated subset of Hugging Face Hub organizations. Reject conversion of checkpoints from unknown authors.
3. SANDBOX: Run all model conversion and checkpoint loading operations in ephemeral, network-isolated containers with no access to credentials or sensitive data.
4. AUDIT: Review CI/CD pipelines and training scripts for any automated checkpoint downloads and conversions — prioritize those running with elevated cloud permissions.
5. DETECT: Monitor for unexpected subprocess spawning or network calls from Python processes that are performing model loading operations. Alert on convert_config invocations against externally sourced checkpoints.
6. REVIEW: Scan internal model registries for SEW-D checkpoints obtained from external sources before the patch is applied.
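The allowlist step can be sketched as a simple pre-download gate. The organization names and the helper function below are illustrative assumptions, not part of any Hugging Face API:

```python
# Hypothetical allowlist gate, run before any checkpoint download or
# conversion. The organization names here are examples only.
APPROVED_ORGS = {"internal-models", "facebook", "microsoft"}

def is_approved_checkpoint(repo_id: str) -> bool:
    """Accept only Hub repo IDs whose organization prefix is allowlisted."""
    org, _, name = repo_id.partition("/")
    return bool(name) and org in APPROVED_ORGS

# Gate a conversion pipeline on the check before fetching anything.
assert is_approved_checkpoint("facebook/sew-d-tiny-100k")
assert not is_approved_checkpoint("unknown-author/sew-d-finetuned")
```

Checking the organization prefix rather than the full repo name keeps the allowlist short and reviewable; individual model names within an approved org can still be vetted separately.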
Technical Details
NVD Description
Hugging Face Transformers SEW-D convert_config Code Injection Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must convert a malicious checkpoint. The specific flaw exists within the convert_config function. The issue results from the lack of proper validation of a user-supplied string before using it to execute Python code. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-28252.
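The flaw class is easiest to see in a minimal sketch. Both functions and the config field name below are hypothetical, not the actual Transformers code: passing an attacker-controlled config string to eval() executes it, whereas mapping untrusted strings through an explicit lookup table does not.

```python
# NOT the actual Transformers code -- an illustrative sketch of the
# "unvalidated string used to execute Python code" vulnerability class.
def unsafe_convert_config(config: dict) -> dict:
    # BAD: the attacker controls the checkpoint config, so eval() here
    # runs arbitrary Python in the converting user's context.
    return {"extractor": eval(config["feature_extractor_type"])}

# Safer pattern: untrusted strings may only select from a fixed table.
SAFE_TYPES = {"mel": "MelExtractor", "mfcc": "MfccExtractor"}  # illustrative names

def safe_convert_config(config: dict) -> dict:
    value = config["feature_extractor_type"]
    if value not in SAFE_TYPES:
        raise ValueError(f"unsupported extractor type: {value!r}")
    return {"extractor": SAFE_TYPES[value]}
```

With the unsafe version, even a benign-looking expression string is executed (`"1+1"` yields `2`); a real payload would call `__import__('os').system(...)` instead. The safe version rejects anything outside the table.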
Exploitation Scenario
An adversary publishes a weaponized SEW-D model checkpoint to Hugging Face Hub under a plausible researcher or organization account. The checkpoint's configuration contains a crafted string that, when processed by convert_config, is passed unsanitized to a Python code execution path. An ML engineer — or an automated training pipeline — downloads the checkpoint and runs conversion as part of a fine-tuning or evaluation workflow. The injected code executes in the engineer's context: it exfiltrates cloud credentials from environment variables or ~/.aws/credentials, deploys a reverse shell back to attacker infrastructure, or silently poisons the training dataset. Because the attack surface is a trusted developer operation, it bypasses most security controls and goes undetected until significant damage is done.
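The DETECT and REVIEW steps can be supplemented with a coarse pre-conversion scan of a checkpoint's raw config text for strings that look like executable Python. This is a best-effort heuristic sketch — the pattern list is illustrative and easily bypassed — and is no substitute for patching or sandboxing:

```python
# Heuristic scan of an externally sourced checkpoint config BEFORE any
# conversion code touches it. Patterns are illustrative, not exhaustive.
import json
import re

SUSPICIOUS = [r"__import__", r"\bos\.system\b", r"\bsubprocess\b", r"\beval\(", r"\bexec\("]

def flag_suspicious_config(config_json: str) -> list[str]:
    """Return every suspicious pattern found in the raw config text."""
    return [p for p in SUSPICIOUS if re.search(p, config_json)]

benign = json.dumps({"model_type": "sew-d", "hidden_size": 768})
poisoned = json.dumps({"model_type": "__import__('os').system('id')"})
```

A non-empty result should quarantine the checkpoint for manual review; an empty result proves nothing, since payloads can be obfuscated.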