CVE-2025-14926: transformers: Code Injection enables RCE

Published December 23, 2025
CISO Take

CVE-2025-14926 is a code injection flaw in Hugging Face Transformers that allows RCE when converting malicious model checkpoints. Any team that downloads and converts third-party models — a common MLOps practice — is at risk. Immediately audit Transformers usage across your ML pipelines, patch to a fixed version, and enforce a model provenance policy restricting checkpoint sources to verified publishers.

Risk Assessment

High risk for organizations running ML pipelines that ingest external or community-sourced model checkpoints. While user interaction is required (a practitioner must explicitly run convert_config on a malicious checkpoint), this is standard MLOps behavior when fine-tuning or porting models. The Hugging Face Hub's open publishing model makes distributing weaponized checkpoints trivial. No CVSS score has been assigned yet, but the CWE-94 (Code Injection) primitive resulting in arbitrary RCE places practical severity at Critical for affected workflows. The EPSS estimate remains low, but exploit complexity is moderate given the well-understood nature of unsafe deserialization/eval patterns in Python ML tooling.

Affected Systems

Package: transformers
Ecosystem: pip
Vulnerable Range: not specified
Patched: no patch available yet

Do you use transformers? You're affected.

Severity & Risk

CVSS 3.1: N/A
EPSS: 0.1% chance of exploitation in the next 30 days (higher than 34% of all CVEs)
Exploitation Status: No known exploitation
Sophistication: Trivial

Recommended Action

6 steps
  1. PATCH

    Upgrade Hugging Face Transformers to the patched version once released; monitor https://github.com/huggingface/transformers/security/advisories for the fix.
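Until the fixed release lands, it helps to inventory which Transformers build each pipeline actually runs. A minimal standard-library sketch (the patched version number is not yet known, so this only reports what is installed):

```python
from importlib.metadata import PackageNotFoundError, version
from typing import Optional

def installed_version(package: str) -> Optional[str]:
    """Return the installed version of a package, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# Report the locally installed transformers build so it can be compared
# against the patched release once Hugging Face publishes it.
print("transformers:", installed_version("transformers"))
```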

  2. RESTRICT SOURCES

    Enforce a model allowlist — only load checkpoints from verified organizational accounts or cryptographically signed sources. Block unauthenticated HuggingFace Hub downloads in production pipelines.
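The allowlist in step 2 can be enforced as a gate in front of every download. A minimal pure-Python sketch; the ALLOWED_OWNERS contents and function names are illustrative assumptions, not part of Transformers or the Hub client:

```python
# Hypothetical allowlist gate: permit only repos whose owner namespace
# is on an approved list before any checkpoint is fetched or converted.
ALLOWED_OWNERS = {"my-org", "facebook", "google"}  # illustrative values

def is_allowed_repo(repo_id: str) -> bool:
    """repo_id is 'owner/name' as used on the Hugging Face Hub."""
    owner, _, name = repo_id.partition("/")
    return bool(name) and owner in ALLOWED_OWNERS

def checked_repo(repo_id: str) -> str:
    """Raise instead of silently proceeding with an unapproved source."""
    if not is_allowed_repo(repo_id):
        raise PermissionError(f"checkpoint source not allowlisted: {repo_id}")
    return repo_id
```

Calling checked_repo before any download or convert_config invocation turns an unapproved source into a hard failure rather than a silent risk.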

  3. SANDBOX

    Run all model conversion and checkpoint loading operations inside isolated containers or VMs with no network access and minimal filesystem permissions.

  4. AUDIT CODE

    Search the codebase for calls to convert_config and for eval/exec patterns in Transformers-dependent code.
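The audit in step 4 can be scripted with the standard library alone. A rough sketch that flags candidate lines for manual review (the pattern list is a starting point, not exhaustive):

```python
import os
import re

# Patterns worth a manual look: the vulnerable entry point plus the
# generic dynamic-execution primitives it abuses.
SUSPECT = re.compile(r"\bconvert_config\s*\(|\beval\s*\(|\bexec\s*\(")

def scan_tree(root: str) -> list:
    """Walk a source tree and return (path, line number, line) hits."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="replace") as fh:
                for lineno, line in enumerate(fh, 1):
                    if SUSPECT.search(line):
                        hits.append((path, lineno, line.strip()))
    return hits
```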

  5. DETECT

    Alert on unexpected child process spawning from Python ML processes, outbound connections from training/inference hosts, and anomalous file writes in model storage directories.

  6. INTERIM WORKAROUND

    Disable or gate the convert_config workflow until patched; require security review before introducing new model checkpoints into any pipeline.
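The gate in step 6 can be a thin wrapper that refuses to run conversion without an explicit per-environment opt-in. A sketch using a stand-in convert_config, since the real entry point lives inside the affected Transformers conversion script:

```python
import functools
import os

def gated(fn):
    """Refuse to run fn unless ALLOW_MODEL_CONVERSION=1 is set,
    forcing an explicit, reviewable opt-in per environment."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        if os.environ.get("ALLOW_MODEL_CONVERSION") != "1":
            raise RuntimeError(
                "model conversion disabled pending CVE-2025-14926 patch; "
                "set ALLOW_MODEL_CONVERSION=1 after security review"
            )
        return fn(*args, **kwargs)
    return wrapper

# Stand-in for the real conversion entry point (illustrative only):
@gated
def convert_config(path):
    return f"converted {path}"
```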

CISA SSVC Assessment

Decision: Track
Exploitation: none
Automatable: No
Technical Impact: total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act: Art. 9 - Risk management system; Art. 15 - Accuracy, robustness and cybersecurity; Art. 17 - Quality management system
ISO 42001: A.6.2 - AI risk assessment; A.6.2.6 - AI system supply chain security; A.9.3 - AI system security controls; A.10.2 - AI system supply chain management
NIST AI RMF: GOVERN 1.7 - Processes for tracking AI risks; GOVERN 6.2 - Policies and procedures for AI risk management of third-party AI components; MANAGE 2.2 - Mechanisms for responding to AI risks
OWASP LLM Top 10: LLM03 - Supply Chain Vulnerabilities; LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2025-14926?

CVE-2025-14926 is a code injection flaw in Hugging Face Transformers that allows RCE when converting malicious model checkpoints. Any team that downloads and converts third-party models — a common MLOps practice — is at risk. Immediately audit Transformers usage across your ML pipelines, patch to a fixed version, and enforce a model provenance policy restricting checkpoint sources to verified publishers.

Is CVE-2025-14926 actively exploited?

No confirmed active exploitation of CVE-2025-14926 has been reported, but organizations should still patch proactively.

How to fix CVE-2025-14926?

1. PATCH: Upgrade Hugging Face Transformers to the patched version once released; monitor https://github.com/huggingface/transformers/security/advisories for the fix.
2. RESTRICT SOURCES: Enforce a model allowlist; only load checkpoints from verified organizational accounts or cryptographically signed sources, and block unauthenticated HuggingFace Hub downloads in production pipelines.
3. SANDBOX: Run all model conversion and checkpoint loading operations inside isolated containers or VMs with no network access and minimal filesystem permissions.
4. AUDIT CODE: Search the codebase for calls to convert_config and for eval/exec patterns in Transformers-dependent code.
5. DETECT: Alert on unexpected child process spawning from Python ML processes, outbound connections from training/inference hosts, and anomalous file writes in model storage directories.
6. INTERIM WORKAROUND: Disable or gate the convert_config workflow until patched; require security review before introducing new model checkpoints into any pipeline.

What systems are affected by CVE-2025-14926?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model fine-tuning workflows, MLOps model conversion utilities, model serving (pre-deployment conversion stage), CI/CD pipelines with automated model validation.

What is the CVSS score for CVE-2025-14926?

No CVSS score has been assigned yet.

Technical Details

NVD Description

Hugging Face Transformers SEW convert_config Code Injection Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must convert a malicious checkpoint. The specific flaw exists within the convert_config function. The issue results from the lack of proper validation of a user-supplied string before using it to execute Python code. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-28251.

Exploitation Scenario

An adversary creates a Hugging Face account and publishes a SEW model checkpoint with a maliciously crafted configuration file. The config contains a payload exploiting the lack of input validation in convert_config — for example, a string value that is passed directly to exec() or eval() in the Python runtime. The attacker promotes the model on social media, AI forums, or submits it as a dependency update to an open-source project. A data scientist or automated MLOps pipeline downloads and calls convert_config on the checkpoint during model evaluation or fine-tuning preparation. Arbitrary code executes with the privileges of the ML process — potentially exfiltrating API keys from environment variables, pivoting to cloud storage containing proprietary training data, or establishing persistence on GPU training infrastructure.
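The flaw class described above is a familiar one: a string from an untrusted config reaching Python's dynamic-execution machinery. The snippet below is a generic illustration of that bug class and the standard literal-only fix; it is not the actual Transformers code:

```python
import ast

# Vulnerable pattern (illustrative): evaluating a config value as code.
def parse_value_unsafe(raw: str):
    return eval(raw)  # a checkpoint author controls raw -> arbitrary code

# Safer: ast.literal_eval only accepts Python literals (numbers, strings,
# tuples, lists, dicts, booleans, None) and rejects anything executable.
def parse_value_safe(raw: str):
    return ast.literal_eval(raw)

parse_value_safe("[1, 2, 3]")            # a plain literal list parses fine
# parse_value_safe("__import__('os')")   # raises ValueError instead of importing
```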

Weaknesses (CWE)

CWE-94 - Improper Control of Generation of Code ('Code Injection')

Timeline

Published
December 23, 2025
Last Modified
January 15, 2026
First Seen
December 23, 2025
