CVE-2025-14930: transformers: Deserialization enables RCE

Published December 23, 2025
CISO Take

CVE-2025-14930 is a critical supply chain RCE in Hugging Face Transformers affecting GLM4 model loading. Any team that loads GLM4 model weights from external sources — including HuggingFace Hub — is exposed to arbitrary code execution with the privileges of the loading process. Immediately audit pipelines that auto-load models and restrict model sources to internally verified artifacts until the Transformers library is patched.

Risk Assessment

Risk is HIGH for organizations using GLM4 models via HuggingFace Transformers. Exploitation requires user interaction (opening/loading a malicious model file), which lowers immediate exploitability but is easily achieved in MLOps pipelines that auto-pull models from registries. The blast radius is the full ML runtime environment: training servers, inference endpoints, CI/CD workers, and developer workstations — all typically run with elevated privileges and network access, amplifying post-exploitation impact.

Affected Systems

Package: transformers
Ecosystem: pip
Vulnerable Range: (unspecified)
Patched: No patch available

If you load GLM4 models with transformers, you are affected.

Severity & Risk

CVSS 3.1: Not yet assigned
EPSS: 0.5% chance of exploitation in the next 30 days (higher than 66% of all CVEs)
Exploitation Status: No known exploitation
Sophistication: Moderate

Recommended Action

5 steps
  1. PATCH

    Update Hugging Face Transformers to the fixed version as soon as it is released; monitor ZDI advisory ZDI-25-1145 and the Transformers GitHub releases page for patch confirmation.

  2. BLOCK

    Until patched, restrict model loading to internally hosted, hash-verified artifacts only. Disable auto-pull from HuggingFace Hub in production and CI/CD.

  3. ISOLATE

    Run model loading processes in sandboxed containers with no outbound network, minimal filesystem write access, and no access to secrets/credentials.

  4. DETECT

    Alert on unexpected child process spawning, network connections, or file writes during model load operations. Monitor for pickle/deserialization execution patterns in ML runtime logs.

  5. AUDIT

    Inventory all GLM4 model files currently in use; verify SHA256 hashes against official source checksums.
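
The BLOCK and AUDIT steps above can be sketched in a few lines of Python. Here `MODEL_DIR` and `EXPECTED_HASHES` are hypothetical placeholders for an internal model registry and allow-list; `local_files_only=True` and `use_safetensors=True` are real `from_pretrained()` arguments that prevent Hub pulls and pickle-based weight loading, respectively.

```python
# Sketch: pin a locally mirrored GLM4 checkpoint and verify its hashes
# before loading. All paths and hashes below are illustrative placeholders.
import hashlib
from pathlib import Path

MODEL_DIR = Path("/models/glm4-verified")   # hypothetical internal mirror
EXPECTED_HASHES = {                          # from your own allow-list
    "model.safetensors": "<pinned sha256 from official source>",
}

def sha256_of(path: Path) -> str:
    """Stream a file through SHA256 without loading it into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifacts(model_dir: Path, expected: dict[str, str]) -> None:
    """Raise if any artifact's hash differs from the pinned value."""
    for name, want in expected.items():
        got = sha256_of(model_dir / name)
        if got != want:
            raise RuntimeError(f"hash mismatch for {name}: {got}")

# Usage (after verification, load offline only):
# verify_artifacts(MODEL_DIR, EXPECTED_HASHES)
# model = AutoModel.from_pretrained(MODEL_DIR, local_files_only=True,
#                                   use_safetensors=True)
```

Keeping the actual `from_pretrained()` call behind a successful `verify_artifacts()` check, with `local_files_only=True`, is what makes the pipeline refuse to silently pull a replacement artifact from the Hub.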

CISA SSVC Assessment

Decision: Track
Exploitation: None
Automatable: No
Technical Impact: Total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
  Article 15 - Accuracy, robustness and cybersecurity
  Article 9 - Risk management system
ISO 42001
  8.4 - AI system lifecycle
  A.6.2 - AI system supply chain management
NIST AI RMF
  GOVERN 6.1 - Policies and procedures for third-party AI risk
  MANAGE 2.2 - Mechanisms to sustain the value of deployed AI systems
  MAP 5.1 - Likelihood of impacts from third-party components
OWASP LLM Top 10
  LLM03:2025 - Supply Chain
  LLM04:2025 - Data and Model Poisoning

Frequently Asked Questions

What is CVE-2025-14930?

CVE-2025-14930 is a critical supply chain RCE in Hugging Face Transformers affecting GLM4 model loading. Any team that loads GLM4 model weights from external sources — including HuggingFace Hub — is exposed to arbitrary code execution with the privileges of the loading process. Immediately audit pipelines that auto-load models and restrict model sources to internally verified artifacts until the Transformers library is patched.

Is CVE-2025-14930 actively exploited?

No confirmed active exploitation of CVE-2025-14930 has been reported, but organizations should still patch proactively.

How to fix CVE-2025-14930?

1. PATCH: Update Hugging Face Transformers to the fixed version as soon as it is released; monitor ZDI advisory ZDI-25-1145 and the Transformers GitHub releases page for patch confirmation.
2. BLOCK: Until patched, restrict model loading to internally hosted, hash-verified artifacts only. Disable auto-pull from HuggingFace Hub in production and CI/CD.
3. ISOLATE: Run model loading processes in sandboxed containers with no outbound network, minimal filesystem write access, and no access to secrets or credentials.
4. DETECT: Alert on unexpected child process spawning, network connections, or file writes during model load operations. Monitor for pickle/deserialization execution patterns in ML runtime logs.
5. AUDIT: Inventory all GLM4 model files currently in use; verify SHA256 hashes against official source checksums.

What systems are affected by CVE-2025-14930?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps CI/CD pipelines, RAG pipelines using GLM4 backbone, developer workstations with HuggingFace integrations.

What is the CVSS score for CVE-2025-14930?

No CVSS score has been assigned yet.

Technical Details

NVD Description

Hugging Face Transformers GLM4 Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of weights. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-28309.
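
Assuming the deserialization vector is pickle-based, as the description's "deserialization of untrusted data" language indicates, the core mechanism is easy to demonstrate: unpickling can invoke an arbitrary callable via `__reduce__`. The snippet below is a benign illustration, not the actual exploit.

```python
# Minimal, benign illustration of why unpickling untrusted data is code
# execution: __reduce__ tells pickle which callable to invoke on load.
import pickle

class Payload:
    def __reduce__(self):
        # An attacker would return something like (os.system, ("...",));
        # here we invoke the harmless builtin str instead.
        return (str, ("deserialization ran this call",))

blob = pickle.dumps(Payload())   # what a malicious weight file embeds
result = pickle.loads(blob)      # merely loading triggers the call
print(result)                    # -> deserialization ran this call
```

No method of `Payload` is ever called explicitly by the loader; the call happens inside `pickle.loads()`, which is exactly why `from_pretrained()` on an attacker-controlled weight file is sufficient for compromise.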

Exploitation Scenario

An attacker crafts a malicious GLM4 model weight file with a serialized payload embedded using Python's pickle or equivalent deserialization vector. They publish it to HuggingFace Hub under a convincing model name (typosquatting a popular GLM4 checkpoint) or compromise an existing model repository. A victim organization's automated MLOps pipeline — running a nightly job to pull the latest model version — downloads and calls `from_pretrained()`, triggering deserialization and executing the attacker's payload. The payload runs as the pipeline service account, which typically has access to cloud credentials, training data, inference infrastructure, and internal APIs. From there, lateral movement or data exfiltration follows.
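
As a pre-load defensive check against the scenario above, pickle streams can be scanned for opcodes that invoke code before they are ever deserialized. Tools like picklescan do this at scale across model archives; the stdlib-only sketch below shows the idea for a single pickle blob.

```python
# Sketch: flag pickle streams containing opcodes that can construct objects
# or invoke callables, without ever executing the stream itself.
import pickle
import pickletools

# Opcodes that resolve globals or trigger calls during unpickling.
RISKY_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE",
                 "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def risky_ops(blob: bytes) -> set[str]:
    """Return the code-invoking opcodes present in a pickle stream."""
    return {op.name for op, _arg, _pos in pickletools.genops(blob)} & RISKY_OPCODES

# Plain data pickles cleanly, with no code-invoking opcodes:
print(risky_ops(pickle.dumps([1, 2, 3])))
# Pickling a reference to a callable does emit one:
print(risky_ops(pickle.dumps(len)))
```

A scanner like this is a screening control, not a guarantee: it rejects any stream that references a callable, including many legitimate ones, which is why the stronger fix is avoiding pickle-format weights entirely in favor of safetensors.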

Weaknesses (CWE)

CWE-502 - Deserialization of Untrusted Data

Timeline

Published
December 23, 2025
Last Modified
January 21, 2026
First Seen
December 23, 2025

Related Vulnerabilities