CVE-2025-14928

Severity: Unknown
Published: December 23, 2025
CISO Take

CVE-2025-14928 is a code injection RCE in Hugging Face Transformers' HuBERT convert_config function — a malicious model checkpoint triggers arbitrary Python execution on the converting machine. If your ML teams pull and convert external HuBERT checkpoints, treat this as a critical supply chain risk: isolate conversion workflows immediately and block untrusted checkpoint sources until a patch is confirmed. This is the exact attack pattern attackers use to own ML infrastructure via 'innocent' model downloads.

Affected Systems

Package: transformers
Ecosystem: pip
Vulnerable Range: (not specified)
Patched: No patch

Do you use transformers? You're affected.

Severity & Risk

CVSS 3.1: N/A
EPSS: N/A
KEV Status: Not in KEV
Sophistication: Moderate

Recommended Action

  1. IMMEDIATE: Audit all pipelines using transformers' HuBERT convert_config and inventory where external checkpoints are consumed.
  2. Block or quarantine conversion of HuBERT checkpoints from untrusted sources (anything outside your own model registry).
  3. Run any checkpoint conversion in ephemeral, network-isolated containers with no access to production credentials or secrets.
  4. Monitor for unexpected subprocess spawning or network connections originating from Python ML processes during model conversion.
  5. Subscribe to transformers release notes and the GitHub Advisory Database for patch availability; no fixed version is confirmed yet per CVE data.
  6. Apply least privilege to ML pipeline service accounts so the exploitation blast radius is contained.
  7. Detection: where feasible, alert on eval()/exec() calls in transformers processes via runtime security tools (Falco, eBPF-based).
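Step 3 above, running conversion with no access to production credentials, can be sketched as a wrapper that scrubs the child process environment before spawning the converter. This is a minimal illustration, not production sandboxing: the allowlist SAFE_ENV_KEYS and the script/checkpoint arguments are hypothetical, and a real deployment would add network isolation (e.g., a container with no egress) on top.

```python
import os
import subprocess
import sys

# Hypothetical allowlist: only benign variables survive into the
# conversion process; cloud credentials and tokens are dropped.
SAFE_ENV_KEYS = {"PATH", "HOME", "LANG", "TMPDIR"}

def scrubbed_env():
    """Return a copy of the environment with secrets removed, so a
    code-injection payload in a checkpoint cannot harvest credentials."""
    return {k: v for k, v in os.environ.items() if k in SAFE_ENV_KEYS}

def run_conversion(converter_script, checkpoint_path, timeout=600):
    # Spawn the converter as a child process with the scrubbed
    # environment and a hard timeout; capture output for review.
    return subprocess.run(
        [sys.executable, converter_script, checkpoint_path],
        env=scrubbed_env(),
        capture_output=True,
        text=True,
        timeout=timeout,
    )
```

Environment scrubbing only limits the blast radius of step 3; it does not prevent the injected code from running, which is why steps 2 and 4 (quarantine and monitoring) still apply.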

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
  - Article 9 - Risk Management System
  - Article 15 - Accuracy, robustness and cybersecurity
  - Article 17 - Quality Management System (Third Party Components)
ISO 42001
  - 6.1.2 - AI risk assessment
  - 8.4 - AI system operation and monitoring
  - A.6.1.6 - AI system supply chain management
  - A.9.4 - AI system security
NIST AI RMF
  - GOVERN 1.6 - Policies and procedures for AI risk and supply chain
  - MANAGE 2.2 - Mechanisms to sustain the value of deployed AI
  - MAP 2.1 - Scientific findings and AI supply chain risks
OWASP LLM Top 10
  - LLM03:2025 - Supply Chain
  - LLM05 - Supply Chain Vulnerabilities

Technical Details

NVD Description

Hugging Face Transformers HuBERT convert_config Code Injection Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must convert a malicious checkpoint. The specific flaw exists within the convert_config function. The issue results from the lack of proper validation of a user-supplied string before using it to execute Python code. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-28253.
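The flaw class described above, evaluating a user-supplied string as Python code, can be illustrated with a minimal sketch. This is not the actual transformers code path; the function names are hypothetical and stand in for any config parser that reaches eval() with attacker-controlled input, contrasted with ast.literal_eval, which only accepts Python literals.

```python
import ast

def parse_field_unsafe(raw):
    """Vulnerable pattern: a config field from an untrusted checkpoint
    is evaluated as Python, so any expression the attacker embeds runs."""
    return eval(raw)  # attacker-controlled string executes as code

def parse_field_safe(raw):
    """Safe alternative: ast.literal_eval accepts only Python literals
    (numbers, strings, tuples, lists, dicts, booleans, None) and raises
    ValueError on anything executable, such as calls or imports."""
    return ast.literal_eval(raw)
```

For a field that is genuinely meant to hold a literal (e.g., a shape tuple), literal_eval preserves the legitimate use while rejecting payloads like `__import__('os').system(...)` outright.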

Exploitation Scenario

Attacker publishes a crafted HuBERT model to the Hugging Face Hub with a poisoned config embedding a Python payload (e.g., reverse shell or credential harvester) in a field consumed by convert_config. They promote the model through legitimate-looking channels — a GitHub repo, a paper citation, or a Slack message to an ML team. An ML engineer or automated MLOps job pulls the checkpoint and runs the conversion step. convert_config evaluates the malicious string as Python code, executing the payload with the privileges of the converting process. In a typical MLOps environment this yields access to AWS/GCP credentials in environment variables, internal artifact stores, and training compute — all from a single model download.
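A quarantine step can catch many payloads of this kind before a checkpoint ever reaches conversion. The sketch below is a hypothetical pre-flight scanner, not a complete defense: the regex pattern and the JSON config layout are assumptions, and a determined attacker can obfuscate past string matching, so this complements rather than replaces isolated conversion.

```python
import json
import re

# Hypothetical denylist of call patterns that have no business
# appearing inside a model config's string values.
SUSPICIOUS = re.compile(r"\b(eval|exec|__import__|os\.system|subprocess)\b")

def preflight_scan(config_path):
    """Walk every string value in a checkpoint's JSON config and
    return the paths of values containing Python call patterns."""
    with open(config_path) as f:
        cfg = json.load(f)
    hits = []

    def walk(obj, path="$"):
        if isinstance(obj, dict):
            for key, value in obj.items():
                walk(value, f"{path}.{key}")
        elif isinstance(obj, list):
            for i, value in enumerate(obj):
                walk(value, f"{path}[{i}]")
        elif isinstance(obj, str) and SUSPICIOUS.search(obj):
            hits.append(path)

    walk(cfg)
    return hits
```

A non-empty result would route the checkpoint to quarantine for manual review instead of the automated conversion job.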

Weaknesses (CWE)

Timeline

Published: December 23, 2025
Last Modified: January 21, 2026
First Seen: December 23, 2025