CVE-2025-14930: transformers: Deserialization enables RCE
CVE-2025-14930 is a critical supply chain RCE in Hugging Face Transformers affecting GLM4 model loading. Any team that loads GLM4 model weights from external sources — including HuggingFace Hub — is exposed to arbitrary code execution with the privileges of the loading process. Immediately audit pipelines that auto-load models and restrict model sources to internally verified artifacts until the Transformers library is patched.
Risk Assessment
Risk is HIGH for organizations using GLM4 models via HuggingFace Transformers. Exploitation requires user interaction (opening/loading a malicious model file), which lowers immediate exploitability but is easily achieved in MLOps pipelines that auto-pull models from registries. The blast radius is the full ML runtime environment: training servers, inference endpoints, CI/CD workers, and developer workstations — all typically run with elevated privileges and network access, amplifying post-exploitation impact.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| transformers | pip | — | No patch |
If your transformers workloads load GLM4 model weights, you're affected; no fixed version has been released yet.
Recommended Action
1. PATCH: Update HuggingFace Transformers to the fixed version as soon as it is released; monitor ZDI advisory ZDI-25-1145 and HuggingFace GitHub releases for patch confirmation.
2. BLOCK: Until patched, restrict model loading to internally hosted, hash-verified artifacts only. Disable auto-pull from HuggingFace Hub in production and CI/CD.
3. ISOLATE: Run model-loading processes in sandboxed containers with no outbound network, minimal filesystem write access, and no access to secrets/credentials.
4. DETECT: Alert on unexpected child process spawning, network connections, or file writes during model load operations. Monitor for pickle/deserialization execution patterns in ML runtime logs.
5. AUDIT: Inventory all GLM4 model files currently in use; verify SHA-256 hashes against official source checksums.
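The AUDIT step above can be sketched as a small hash-verification helper. This is a minimal sketch: the file names and the checksum dictionary passed to it are placeholders for your internally recorded values, not official checksums.

```python
import hashlib
from pathlib import Path

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so multi-GB weight files
    are never read into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model_dir(model_dir: str, expected: dict[str, str]) -> list[str]:
    """Return the names of files that are missing or whose SHA-256
    digest does NOT match the pinned checksum."""
    mismatches = []
    for name, digest in expected.items():
        path = Path(model_dir) / name
        if not path.exists() or sha256_of(str(path)) != digest:
            mismatches.append(name)
    return mismatches
```

Any non-empty return value should fail the pipeline run before `from_pretrained()` is ever called on the directory.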
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-14930?
CVE-2025-14930 is a critical supply chain RCE in Hugging Face Transformers affecting GLM4 model loading. Any team that loads GLM4 model weights from external sources — including HuggingFace Hub — is exposed to arbitrary code execution with the privileges of the loading process. Immediately audit pipelines that auto-load models and restrict model sources to internally verified artifacts until the Transformers library is patched.
Is CVE-2025-14930 actively exploited?
No confirmed active exploitation of CVE-2025-14930 has been reported, but organizations should still patch proactively.
How to fix CVE-2025-14930?
1. PATCH: Update HuggingFace Transformers to the fixed version as soon as it is released; monitor ZDI advisory ZDI-25-1145 and HuggingFace GitHub releases for patch confirmation.
2. BLOCK: Until patched, restrict model loading to internally hosted, hash-verified artifacts only. Disable auto-pull from HuggingFace Hub in production and CI/CD.
3. ISOLATE: Run model-loading processes in sandboxed containers with no outbound network, minimal filesystem write access, and no access to secrets/credentials.
4. DETECT: Alert on unexpected child process spawning, network connections, or file writes during model load operations. Monitor for pickle/deserialization execution patterns in ML runtime logs.
5. AUDIT: Inventory all GLM4 model files currently in use; verify SHA-256 hashes against official source checksums.
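One way to implement the BLOCK step is the offline mode that huggingface_hub and transformers already support via environment variables; with these set, `from_pretrained()` resolves only models already present in the local cache or given as local paths. The service entrypoint in the comment is hypothetical.

```shell
# Refuse all network downloads from HuggingFace Hub.
export HF_HUB_OFFLINE=1
# Make transformers resolve models from the local cache/paths only.
export TRANSFORMERS_OFFLINE=1
# python serve.py --model /opt/models/glm4-verified   # hypothetical entrypoint
```

Combine this with hash verification of the locally hosted artifacts so "local" also means "verified".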
What systems are affected by CVE-2025-14930?
This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps CI/CD pipelines, RAG pipelines using GLM4 backbone, developer workstations with HuggingFace integrations.
What is the CVSS score for CVE-2025-14930?
No CVSS score has been assigned yet.
Technical Details
NVD Description
Hugging Face Transformers GLM4 Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of weights. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-28309.
Exploitation Scenario
An attacker crafts a malicious GLM4 model weight file with a serialized payload embedded using Python's pickle or equivalent deserialization vector. They publish it to HuggingFace Hub under a convincing model name (typosquatting a popular GLM4 checkpoint) or compromise an existing model repository. A victim organization's automated MLOps pipeline — running a nightly job to pull the latest model version — downloads and calls `from_pretrained()`, triggering deserialization and executing the attacker's payload. The payload runs as the pipeline service account, which typically has access to cloud credentials, training data, inference infrastructure, and internal APIs. From there, lateral movement or data exfiltration follows.
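The deserialization primitive in this scenario can be illustrated with a deliberately harmless pickle payload. `__reduce__` lets a serialized object name any importable callable to invoke at load time; the demo below calls `print`, where a real payload would call something like `os.system` instead.

```python
import pickle

class Malicious:
    """Pickle invokes whatever callable __reduce__ names, with the
    given arguments, the moment the bytes are deserialized."""
    def __reduce__(self):
        # Harmless stand-in for an attacker's callable of choice.
        return (print, ("arbitrary code ran during pickle.loads",))

payload = pickle.dumps(Malicious())
result = pickle.loads(payload)  # print() runs before any model code sees the data
```

The takeaway is that the damage happens inside `loads()` itself: no method on the resulting object needs to be called, which is why loading untrusted weight files is equivalent to running untrusted code.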
Related Vulnerabilities
- CVE-2024-3568 (9.6): HuggingFace Transformers: RCE via pickle deserialization (same package: transformers)
- CVE-2024-11393 (8.8): Transformers: RCE via MaskFormer model deserialization (same package: transformers)
- CVE-2023-6730 (8.8): HuggingFace Transformers: RCE via unsafe deserialization (same package: transformers)
- CVE-2024-11392 (8.8): HuggingFace Transformers: RCE via config deserialization (same package: transformers)
- CVE-2024-11394 (8.8): Transformers: RCE via Trax model deserialization (same package: transformers)