CVE-2025-6921: Transformers: ReDoS in optimizer halts training pipelines

GHSA-4w7r-h757-3r74 · Severity: HIGH · PoC available · CISA SSVC: Track*
Published September 23, 2025
CISO Take

Any ML platform exposing fine-tuning or training configuration to external users (SaaS fine-tuning APIs, MLOps platforms) is at risk if attackers can supply weight-decay regex patterns. The vulnerability drives CPU utilization to 100%, and the CVSS vector requires no authentication. Upgrade to transformers 4.53.0 immediately and audit any interface that accepts optimizer configuration from untrusted inputs.

Risk Assessment

CVSS 7.5 (High), but an EPSS score of 0.00032 (0.03%) indicates very low observed exploitation in the wild. Actual organizational risk is bifurcated: low for closed training environments where only authorized ML engineers control optimizer configs, and high for cloud-based fine-tuning services, AutoML platforms, or any API where external users can supply training hyperparameters. The network-exploitable, no-auth CVSS vector reflects the worst case, where optimizer configs are exposed via API; organizations must assess whether their deployment matches that threat model.

Affected Systems

Package       Ecosystem   Vulnerable Range   Patched
transformers  pip         < 4.53.0           4.53.0

Severity & Risk

CVSS 3.1
7.5 / 10
EPSS
0.03%
chance of exploitation in 30 days
Higher than 10% of all CVEs
Exploitation Status
Exploit Available
Exploitation: MEDIUM
Sophistication
Trivial
Exploitation Confidence
medium
CISA SSVC: Public PoC
Public PoC indexed (trickest/cve)
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

AV: Network
AC: Low
PR: None
UI: None
S: Unchanged
C: None
I: None
A: High

Recommended Action

5 steps
  1. PATCH

    Upgrade transformers to >= 4.53.0 immediately (patch commit 47c34fb).

  2. AUDIT

    Inventory all services that accept optimizer configuration (include_in_weight_decay, exclude_from_weight_decay) from external or untrusted inputs.

  3. VALIDATE

    If patching is not immediately feasible, apply input validation—reject any regex pattern containing catastrophic backtracking constructs (nested quantifiers, alternation with overlap).

  4. ISOLATE

    Run training jobs in resource-constrained containers with CPU quotas to limit blast radius.

  5. DETECT

    Alert on training jobs with >95% CPU utilization persisting beyond expected warmup phase.
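
As a stopgap for step 3 (VALIDATE), a crude pre-filter can reject the classic ReDoS shape, a quantified group that is itself quantified, before any pattern reaches the optimizer. This is a heuristic sketch, not a complete validator, and the helper name is illustrative:

```python
import re

# Heuristic meta-pattern: a parenthesized group containing a quantifier,
# immediately followed by another quantifier -- e.g. "(a+)+" or "(a*)*".
NESTED_QUANTIFIER = re.compile(r"\([^()]*[+*][^()]*\)[+*{]")

def is_suspicious_pattern(pattern: str) -> bool:
    """Return True if the pattern contains a quantified group that is
    itself quantified -- the classic catastrophic-backtracking shape."""
    return bool(NESTED_QUANTIFIER.search(pattern))
```

A validator like this catches the published PoC shape but not every dangerous pattern (e.g. overlapping alternations), so it complements rather than replaces patching and container CPU quotas.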

CISA SSVC Assessment

Decision Track*
Exploitation poc
Automatable Yes
Technical Impact partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act: Article 9 - Risk management system
ISO 42001: 6.1.2 - AI risk treatment; 8.4 - AI system operation
NIST AI RMF: MANAGE-2.2 - Mechanisms to sustain the value of deployed AI; MAP-5.2 - Practices and personnel for AI risk management
OWASP LLM Top 10: LLM04 - Model Denial of Service

Frequently Asked Questions

What is CVE-2025-6921?

CVE-2025-6921 is a Regular Expression Denial of Service (ReDoS) vulnerability in the AdamWeightDecay optimizer of huggingface/transformers prior to 4.53.0. User-controlled regex patterns in the include_in_weight_decay and exclude_from_weight_decay lists reach re.search unvalidated, where a malicious pattern triggers catastrophic backtracking, pegs the CPU at 100%, and hangs the training job.

Is CVE-2025-6921 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2025-6921, increasing the risk of exploitation.

How to fix CVE-2025-6921?

1. PATCH: Upgrade transformers to >= 4.53.0 immediately (patch commit 47c34fb).
2. AUDIT: Inventory all services that accept optimizer configuration (include_in_weight_decay, exclude_from_weight_decay) from external or untrusted inputs.
3. VALIDATE: If patching is not immediately feasible, apply input validation: reject any regex pattern containing catastrophic backtracking constructs (nested quantifiers, alternation with overlap).
4. ISOLATE: Run training jobs in resource-constrained containers with CPU quotas to limit blast radius.
5. DETECT: Alert on training jobs with >95% CPU utilization persisting beyond expected warmup phase.

What systems are affected by CVE-2025-6921?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, fine-tuning workflows, MLOps platforms, model serving with online fine-tuning, AutoML services.

What is the CVSS score for CVE-2025-6921?

CVE-2025-6921 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.03%.

Technical Details

NVD Description

The huggingface/transformers library, versions prior to 4.53.0, is vulnerable to Regular Expression Denial of Service (ReDoS) in the AdamWeightDecay optimizer. The vulnerability arises from the _do_use_weight_decay method, which processes user-controlled regular expressions in the include_in_weight_decay and exclude_from_weight_decay lists. Malicious regular expressions can cause catastrophic backtracking during the re.search call, leading to 100% CPU utilization and a denial of service. This issue can be exploited by attackers who can control the patterns in these lists, potentially causing the machine learning task to hang and rendering services unresponsive.

Exploitation Scenario

An attacker with access to a fine-tuning API or MLOps platform sends a training job request with a malicious regex pattern such as '(a+)+$' in the include_in_weight_decay parameter. When the AdamWeightDecay optimizer's _do_use_weight_decay method calls re.search() against parameter names, catastrophic backtracking triggers and pegs the training process at 100% CPU. In a shared GPU cluster, this hangs all co-located training jobs. In a pay-per-use fine-tuning SaaS, it drives up compute costs and blocks legitimate customers—achieving DoS without any authentication.
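
The backtracking blow-up can be reproduced in a bounded way. The sketch below keeps the input deliberately small so it finishes quickly; the attack scenario above simply uses a longer run of characters so the search effectively never returns:

```python
import re
import time

def timed_search(n: int):
    """Match the malicious pattern '(a+)+$' against n a's plus a
    non-matching tail; re.search must try roughly 2**n ways to split
    the run of a's before it can fail."""
    start = time.perf_counter()
    result = re.search(r"(a+)+$", "a" * n + "!")
    return result, time.perf_counter() - start

# Each additional 'a' roughly doubles the work, so small n stays
# bounded while realistic attacker-sized inputs hang indefinitely.
_, t_small = timed_search(10)
_, t_large = timed_search(20)
```

The exponential growth in t_large versus t_small is the whole attack: no memory exhaustion, no crash, just a CPU pinned inside a single re.search call.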

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

Timeline

Published
September 23, 2025
Last Modified
October 10, 2025
First Seen
September 23, 2025
