CVE-2025-6921: Transformers: ReDoS in optimizer halts training pipelines
GHSA-4w7r-h757-3r74 · HIGH · PoC available · CISA SSVC: Track*

Any ML platform exposing fine-tuning or training configuration to external users (SaaS fine-tuning APIs, MLOps platforms) is at risk if attackers can supply weight-decay regex patterns. The vulnerability causes 100% CPU utilization with no authentication required per the CVSS vector. Patch immediately to transformers 4.53.0 and audit any interface that accepts optimizer configuration from untrusted inputs.
Risk Assessment
CVSS 7.5 (High), but an EPSS score of 0.00032 (0.03%) indicates very low observed exploitation in the wild. Actual organizational risk is bifurcated: low for closed training environments where only authorized ML engineers control optimizer configs, and high for cloud-based fine-tuning services, AutoML platforms, or any API where external users can supply training hyperparameters. The network-exploitable, no-auth CVSS vector reflects the worst-case scenario in which optimizer configs are exposed via API; organizations must assess whether their deployment matches that threat model.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| transformers | pip | < 4.53.0 | 4.53.0 |

Package stats: ~160.4K weekly downloads · OpenSSF Scorecard 4.9 · 7.8K dependents · 39% of dependents patched · ~101 days median time to patch.
Recommended Action
1. PATCH: Upgrade transformers to >= 4.53.0 immediately (patch commit 47c34fb).
2. AUDIT: Inventory all services that accept optimizer configuration (include_in_weight_decay, exclude_from_weight_decay) from external or untrusted inputs.
3. VALIDATE: If patching is not immediately feasible, apply input validation: reject any regex pattern containing catastrophic-backtracking constructs (nested quantifiers, alternation with overlap).
4. ISOLATE: Run training jobs in resource-constrained containers with CPU quotas to limit blast radius.
5. DETECT: Alert on training jobs with >95% CPU utilization persisting beyond the expected warmup phase.
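The VALIDATE step above can be sketched as a pre-filter on user-supplied patterns. This is a hypothetical heuristic (the helper names are not part of transformers), and it is intentionally narrow: it catches the two construct families named in the mitigation step (nested quantifiers and quantified alternations) but is not a complete ReDoS detector.

```python
import re

# Heuristic pre-filter for regex constructs prone to catastrophic
# backtracking. A hypothetical helper, not part of transformers:
# it rejects nested quantifiers such as (a+)+ and quantified
# alternations such as (a|aa)+.
NESTED_QUANTIFIER = re.compile(r"\([^)]*[+*][^)]*\)\s*[+*{]")
QUANTIFIED_ALTERNATION = re.compile(r"\([^)]*\|[^)]*\)\s*[+*{]")


def is_suspicious_pattern(pattern: str) -> bool:
    """True if the pattern contains a known backtracking-prone construct."""
    return bool(
        NESTED_QUANTIFIER.search(pattern)
        or QUANTIFIED_ALTERNATION.search(pattern)
    )


def validate_weight_decay_patterns(patterns):
    """Reject any user-supplied weight-decay pattern that looks dangerous."""
    for p in patterns:
        if is_suspicious_pattern(p):
            raise ValueError(f"rejected potentially catastrophic regex: {p!r}")
    return patterns
```

A deny-list like this is a stopgap only; upgrading to 4.53.0 remains the real fix, since heuristics can be bypassed by equivalent patterns the filter does not recognize.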
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Classification: Track* (per the CISA Coordinator decision tree)
Frequently Asked Questions
What is CVE-2025-6921?
CVE-2025-6921 is a Regular Expression Denial of Service (ReDoS) vulnerability in the AdamWeightDecay optimizer of huggingface/transformers versions prior to 4.53.0. User-controlled regex patterns in the include_in_weight_decay and exclude_from_weight_decay lists can trigger catastrophic backtracking, driving CPU utilization to 100% and hanging training jobs. Any ML platform that exposes fine-tuning or training configuration to external users (SaaS fine-tuning APIs, MLOps platforms) is at risk.
Is CVE-2025-6921 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2025-6921, which increases the risk of exploitation; however, the low EPSS score (0.03%) indicates little observed exploitation in the wild so far.
How to fix CVE-2025-6921?
1. PATCH: Upgrade transformers to >= 4.53.0 immediately (patch commit 47c34fb).
2. AUDIT: Inventory all services that accept optimizer configuration (include_in_weight_decay, exclude_from_weight_decay) from external or untrusted inputs.
3. VALIDATE: If patching is not immediately feasible, apply input validation: reject any regex pattern containing catastrophic-backtracking constructs (nested quantifiers, alternation with overlap).
4. ISOLATE: Run training jobs in resource-constrained containers with CPU quotas to limit blast radius.
5. DETECT: Alert on training jobs with >95% CPU utilization persisting beyond the expected warmup phase.
What systems are affected by CVE-2025-6921?
This vulnerability affects the following AI/ML architecture patterns: training pipelines, fine-tuning workflows, MLOps platforms, model serving with online fine-tuning, AutoML services.
What is the CVSS score for CVE-2025-6921?
CVE-2025-6921 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.03%.
Technical Details
NVD Description
The huggingface/transformers library, versions prior to 4.53.0, is vulnerable to Regular Expression Denial of Service (ReDoS) in the AdamWeightDecay optimizer. The vulnerability arises from the _do_use_weight_decay method, which processes user-controlled regular expressions in the include_in_weight_decay and exclude_from_weight_decay lists. Malicious regular expressions can cause catastrophic backtracking during the re.search call, leading to 100% CPU utilization and a denial of service. This issue can be exploited by attackers who can control the patterns in these lists, potentially causing the machine learning task to hang and rendering services unresponsive.
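The vulnerable logic described above can be illustrated with a simplified sketch. This is an illustrative reconstruction, not the exact transformers source: the point is that strings from the include/exclude lists flow directly into re.search as patterns, so a hostile pattern is evaluated against every parameter name.

```python
import re


class AdamWeightDecaySketch:
    """Simplified sketch of the vulnerable decision logic (illustrative,
    not the exact transformers implementation). Entries in the include
    and exclude lists are user-supplied strings used directly as regexes."""

    def __init__(self, include_in_weight_decay=None, exclude_from_weight_decay=None):
        self._include_in_weight_decay = include_in_weight_decay or []
        self._exclude_from_weight_decay = exclude_from_weight_decay or []

    def _do_use_weight_decay(self, param_name: str) -> bool:
        # Each pattern is passed straight to re.search: a malicious
        # pattern such as (a+)+$ can cause catastrophic backtracking
        # here, pinning the CPU before any training step runs.
        if self._include_in_weight_decay:
            for r in self._include_in_weight_decay:
                if re.search(r, param_name) is not None:
                    return True
        for r in self._exclude_from_weight_decay:
            if re.search(r, param_name) is not None:
                return False
        return True
```

Because the method is called once per model parameter, a single slow pattern is multiplied across thousands of parameter names, which is why the task appears to hang rather than merely stall briefly.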
Exploitation Scenario
An attacker with access to a fine-tuning API or MLOps platform sends a training job request with a malicious regex pattern such as '(a+)+$' in the include_in_weight_decay parameter. When the AdamWeightDecay optimizer's _do_use_weight_decay method calls re.search() against parameter names, catastrophic backtracking triggers and pegs the training process at 100% CPU. In a shared GPU cluster, this hangs all co-located training jobs. In a pay-per-use fine-tuning SaaS, it drives up compute costs and blocks legitimate customers—achieving DoS without any authentication.
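The exponential blow-up in the scenario above can be demonstrated safely on deliberately small inputs. The sketch below times re.search with the '(a+)+$' pattern from the scenario against two non-matching subjects; each extra 'a' roughly doubles the backtracking work, which is why a realistic parameter name pins a CPU core indefinitely.

```python
import re
import time


def time_search(pattern: str, subject: str) -> float:
    """Return how long re.search takes; with a catastrophic pattern and a
    non-matching subject, the engine backtracks exhaustively before failing."""
    start = time.perf_counter()
    re.search(pattern, subject)  # returns None only after full backtracking
    return time.perf_counter() - start


malicious = r"(a+)+$"
# Subjects end in '!' so the '$' anchor can never match.
short = time_search(malicious, "a" * 16 + "!")
long_ = time_search(malicious, "a" * 22 + "!")  # 6 more chars: ~64x the work
```

Kept small so it terminates in well under a minute; extending the subject by a handful of characters is enough to push the search past any practical time budget, which matches the 100% CPU hang described in the scenario.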
Weaknesses (CWE)
CWE-1333: Inefficient Regular Expression Complexity
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

References
- github.com/advisories/GHSA-4w7r-h757-3r74
- github.com/huggingface/transformers/commit/d37f7517972f67e3f2194c000ed0f87f064e5099
- nvd.nist.gov/vuln/detail/CVE-2025-6921
- github.com/huggingface/transformers/commit/47c34fba5c303576560cb29767efb452ff12b8be (patch)
- huntr.com/bounties/287d15a7-6e7c-45d2-8c05-11e305776f1f (exploit, third party)
- github.com/ARPSyndicate/cve-scores (exploit)
Related Vulnerabilities
- CVE-2024-3568 (CVSS 9.6): HuggingFace Transformers: RCE via pickle deserialization (same package: transformers)
- CVE-2024-11393 (CVSS 8.8): Transformers: RCE via MaskFormer model deserialization (same package: transformers)
- CVE-2024-11392 (CVSS 8.8): HuggingFace Transformers: RCE via config deserialization (same package: transformers)
- CVE-2023-6730 (CVSS 8.8): HuggingFace Transformers: RCE via unsafe deserialization (same package: transformers)
- CVE-2024-11394 (CVSS 8.8): Transformers: RCE via Trax model deserialization (same package: transformers)