CVE-2025-14927: transformers: Code Injection enables RCE

Published December 23, 2025
CISO Take

CVE-2025-14927 is an RCE via code injection in Hugging Face Transformers' SEW-D checkpoint conversion function, a routine ML operation that engineers perform without suspicion. Any organization that downloads and converts external model checkpoints is exposed; a single poisoned checkpoint on Hugging Face Hub is enough to compromise a developer workstation or training server. Upgrade Transformers as soon as a fixed release ships, enforce an approved-sources allowlist for checkpoints, and run all conversion operations in isolated sandboxes.

Risk Assessment

Despite lacking a published CVSS score, the risk is HIGH. Code injection via unvalidated user-supplied strings in a popular ML framework (Hugging Face Transformers) with a wide install base creates broad exposure. Exploitation is realistic: ML engineers routinely download and convert checkpoints from public repositories as part of standard workflows, and the 'user interaction required' qualifier maps directly to normal day-to-day ML operations — not a security-conscious action. The blast radius includes CI/CD pipelines, training servers, and developer workstations, which typically hold cloud credentials, dataset access, and model artifacts.

Affected Systems

Package: transformers
Ecosystem: pip
Vulnerable range: not specified
Patched version: none available at time of publication

Do you use transformers? You're affected.

Severity & Risk

CVSS 3.1: N/A
EPSS: 0.1% chance of exploitation within 30 days (higher than 34% of all CVEs)
Exploitation status: no known exploitation
Sophistication: moderate

Recommended Action

6 steps
  1. PATCH

    Upgrade huggingface/transformers to the version that addresses ZDI-25-1148 — monitor the official GitHub repo and PyPI for the patched release.

  2. ALLOWLIST

    Restrict checkpoint sources to verified, internal registries or a curated subset of Hugging Face Hub organizations. Reject conversion of checkpoints from unknown authors.

  3. SANDBOX

    Run all model conversion and checkpoint loading operations in ephemeral, network-isolated containers with no access to credentials or sensitive data.

  4. AUDIT

    Review CI/CD pipelines and training scripts for any automated checkpoint downloads and conversions — prioritize those running with elevated cloud permissions.

  5. DETECT

    Monitor for unexpected subprocess spawning or network calls from Python processes that are performing model loading operations. Alert on convert_config invocations against externally sourced checkpoints.

  6. REVIEW

    Scan internal model registries for SEW-D checkpoints obtained from external sources before the patch is applied.
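The detection guidance in step 5 can be sketched with a standard-library audit hook. The event names ("exec", "subprocess.Popen", "socket.connect") are real CPython audit events defined by PEP 578; the alerting policy itself is illustrative, not an existing tool:

```python
import sys

# Sketch of the DETECT step: a PEP 578 audit hook that records code-execution,
# subprocess, and network events fired while a checkpoint conversion runs.
SUSPICIOUS_EVENTS = ("exec", "subprocess.Popen", "socket.connect")
observed = []

def conversion_watchdog(event, args):
    if event in SUSPICIOUS_EVENTS:
        observed.append(event)  # in production: raise an alert instead

sys.addaudithook(conversion_watchdog)

# Simulate what an injected payload triggers during a malicious conversion:
# eval() of a compiled code object raises the "exec" audit event.
eval(compile("1 + 1", "<checkpoint-config>", "eval"))
print(observed)
```

Note that audit hooks cannot be removed once installed, so a real deployment would install one process-wide in the conversion sandbox rather than per call.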

CISA SSVC Assessment

Decision: Track
Exploitation: none
Automatable: no
Technical Impact: total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
  Art. 9 / Art. 17 - Risk management system / Quality management for high-risk AI
  Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
  A.10.1 - AI system supply chain security
  A.6.2 - AI system development and supply chain
NIST AI RMF
  GOVERN 6.1 - Policies for third-party AI component risk
  MANAGE 2.2 - Risk treatment for AI supply chain
  MS-2.5 - AI risk management for third-party components
OWASP LLM Top 10
  LLM03:2025 - Supply Chain Vulnerabilities
  LLM05:2025 - Improper Output Handling

Frequently Asked Questions

What is CVE-2025-14927?

CVE-2025-14927 is an RCE via code injection in Hugging Face Transformers' SEW-D checkpoint conversion function, a routine ML operation that engineers perform without suspicion. Any organization that downloads and converts external model checkpoints is exposed; a single poisoned checkpoint on Hugging Face Hub is enough to compromise a developer workstation or training server. Upgrade Transformers as soon as a fixed release ships, enforce an approved-sources allowlist for checkpoints, and run all conversion operations in isolated sandboxes.

Is CVE-2025-14927 actively exploited?

No confirmed active exploitation of CVE-2025-14927 has been reported, but organizations should still patch proactively.

How to fix CVE-2025-14927?

1. PATCH: Upgrade huggingface/transformers to the version that addresses ZDI-25-1148 — monitor the official GitHub repo and PyPI for the patched release.
2. ALLOWLIST: Restrict checkpoint sources to verified, internal registries or a curated subset of Hugging Face Hub organizations. Reject conversion of checkpoints from unknown authors.
3. SANDBOX: Run all model conversion and checkpoint loading operations in ephemeral, network-isolated containers with no access to credentials or sensitive data.
4. AUDIT: Review CI/CD pipelines and training scripts for any automated checkpoint downloads and conversions — prioritize those running with elevated cloud permissions.
5. DETECT: Monitor for unexpected subprocess spawning or network calls from Python processes that are performing model loading operations. Alert on convert_config invocations against externally sourced checkpoints.
6. REVIEW: Scan internal model registries for SEW-D checkpoints obtained from external sources before the patch is applied.
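The ALLOWLIST control can be sketched as a pre-download gate on Hub repository ids. The organization names and policy below are examples, not a Transformers or huggingface_hub API:

```python
# Hedged sketch of the ALLOWLIST step: gate Hub repo ids ("org/model") on an
# approved-organizations set before any download or conversion runs.
# APPROVED_ORGS is an example policy, not a shipped feature of any library.
APPROVED_ORGS = {"internal-models", "facebook", "google"}

def checkpoint_allowed(repo_id: str) -> bool:
    """Return True only for repo ids whose organization is approved."""
    org, sep, name = repo_id.partition("/")
    return bool(sep) and bool(name) and org in APPROVED_ORGS

print(checkpoint_allowed("facebook/sew-d-tiny-100k"))    # True
print(checkpoint_allowed("unknown-author/sew-d-model"))  # False
```

Wiring a check like this into the pipeline step that fetches checkpoints turns the allowlist from policy text into an enforced control.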

What systems are affected by CVE-2025-14927?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model conversion workflows, MLOps pipelines, model serving, fine-tuning infrastructure.

What is the CVSS score for CVE-2025-14927?

No CVSS score has been assigned yet.

Technical Details

NVD Description

Hugging Face Transformers SEW-D convert_config Code Injection Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must convert a malicious checkpoint. The specific flaw exists within the convert_config function. The issue results from the lack of proper validation of a user-supplied string before using it to execute Python code. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-28252.

Exploitation Scenario

An adversary publishes a weaponized SEW-D model checkpoint to Hugging Face Hub under a plausible researcher or organization account. The checkpoint's configuration contains a crafted string that, when processed by convert_config, is passed unsanitized to a Python code execution path. An ML engineer — or an automated training pipeline — downloads the checkpoint and runs conversion as part of a fine-tuning or evaluation workflow. The injected code executes in the engineer's context: it exfiltrates cloud credentials from environment variables or ~/.aws/credentials, deploys a reverse shell back to attacker infrastructure, or silently poisons the training dataset. Because the attack surface is a trusted developer operation, it bypasses most security controls and goes undetected until significant damage is done.

Weaknesses (CWE)

CWE-94 - Improper Control of Generation of Code ('Code Injection')

Timeline

Published
December 23, 2025
Last Modified
January 15, 2026
First Seen
December 23, 2025

Related Vulnerabilities