CVE-2025-6051: Transformers: ReDoS in EnglishNormalizer exhausts CPU

GHSA-rcv9-qm8p-9p6j MEDIUM PoC AVAILABLE CISA: TRACK*
Published September 14, 2025
CISO Take

Upgrade Hugging Face Transformers to 4.53.0 immediately if you run any TTS or text normalization pipelines that accept external input. This ReDoS is trivially exploitable with no authentication — a single crafted string of digits can peg a CPU core and degrade or kill your inference service. The EPSS score is near zero today, but the attack pattern is trivial enough that exploitation could spike once PoC details spread from the Huntr disclosure.

Risk Assessment

Effective risk is moderate-to-high for externally-exposed inference APIs despite the CVSS 5.3 rating. The CVSS vector (AV:N/AC:L/PR:N/UI:N) means any unauthenticated remote attacker can trigger it. Impact is purely availability (no confidentiality or integrity loss), but in production AI deployments, a sustained CPU exhaustion attack can translate to SLA breaches, autoscaling cost spikes, and cascading failures in multi-model pipelines. Low EPSS (0.00034) reflects limited active exploitation today, but the vulnerability is in one of the most-deployed ML libraries globally.

Affected Systems

Package Ecosystem Vulnerable Range Patched
transformers pip < 4.53.0 4.53.0

Severity & Risk

CVSS 3.1
5.3 / 10
EPSS
0.03%
chance of exploitation in 30 days
Higher than 10% of all CVEs
Exploitation Status
Exploit Available
Exploitation: MEDIUM
Sophistication
Trivial
Exploitation Confidence
Medium
CISA SSVC: Public PoC
Public PoC indexed (trickest/cve)
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

AV: Network
AC: Low
PR: None
UI: None
S: Unchanged
C: None
I: None
A: Low

Recommended Action

5 steps
  1. PATCH

    Upgrade transformers to >= 4.53.0 (pip install --upgrade transformers). Verify with pip show transformers.

  2. WORKAROUND (if immediate patching is blocked): Add input validation upstream — reject or truncate strings exceeding a reasonable digit-run length (e.g., reject inputs with consecutive digit sequences > 50 chars).

  3. DETECTION

    Alert on CPU utilization spikes in inference workers correlated with text-processing requests; set process-level CPU limits (cgroups/K8s resource limits) to prevent one request from monopolizing the node.

  4. INVENTORY

    Audit all services importing transformers and identify which expose text input to the EnglishNormalizer code path.

  5. VERIFY

    Check the commit at ba8eaba9865618253f997784aa565b96206426f0 for the exact regex fix to understand the vulnerable pattern.
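The workaround in step 2 can be sketched as a small pre-filter applied before untrusted text reaches the normalizer. This is an illustrative sketch, not library code: the 50-character threshold and the function names are assumptions from the guidance above, to be tuned for your workload.

```python
import re

# Illustrative threshold from step 2 above; tune for your workload.
MAX_DIGIT_RUN = 50

# Matches any run of more than MAX_DIGIT_RUN consecutive digits.
_LONG_DIGIT_RUN = re.compile(rf"\d{{{MAX_DIGIT_RUN + 1},}}")


def is_safe_text(text: str) -> bool:
    """Return False if the input contains a digit run long enough to
    risk pathological backtracking in downstream normalization."""
    return _LONG_DIGIT_RUN.search(text) is None


def truncate_digit_runs(text: str) -> str:
    """Alternative to outright rejection: clip oversized digit runs."""
    return _LONG_DIGIT_RUN.sub(lambda m: m.group(0)[:MAX_DIGIT_RUN], text)
```

In an API handler, reject requests where `is_safe_text()` returns False (e.g. with HTTP 400), or apply `truncate_digit_runs()` when dropping requests is not acceptable. Either approach bounds the work the vulnerable regex can do.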

CISA SSVC Assessment

Decision Track*
Exploitation poc
Automatable No
Technical Impact partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.2.6 - AI system availability and resilience
NIST AI RMF
MANAGE-2.2 - Mechanisms to sustain value of deployed AI
OWASP LLM Top 10
LLM10:2025 - Unbounded Consumption

Frequently Asked Questions

What is CVE-2025-6051?

CVE-2025-6051 is a Regular Expression Denial of Service (ReDoS) vulnerability in the Hugging Face Transformers library, specifically in the normalize_numbers() method of the EnglishNormalizer class. A crafted input string containing a long sequence of digits triggers catastrophic regex backtracking, consuming excessive CPU in text-to-speech and number-normalization pipelines. Versions up to 4.52.4 are affected; the issue is fixed in 4.53.0.

Is CVE-2025-6051 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2025-6051, increasing the risk of exploitation.

How to fix CVE-2025-6051?

1. PATCH: Upgrade transformers to >= 4.53.0 (pip install --upgrade transformers). Verify with pip show transformers.
2. WORKAROUND (if immediate patching is blocked): Add input validation upstream — reject or truncate strings exceeding a reasonable digit-run length (e.g., reject inputs with consecutive digit sequences > 50 chars).
3. DETECTION: Alert on CPU utilization spikes in inference workers correlated with text-processing requests; set process-level CPU limits (cgroups/K8s resource limits) to prevent one request from monopolizing the node.
4. INVENTORY: Audit all services importing transformers and identify which expose text input to the EnglishNormalizer code path.
5. VERIFY: Check the commit at ba8eaba9865618253f997784aa565b96206426f0 for the exact regex fix to understand the vulnerable pattern.

What systems are affected by CVE-2025-6051?

This vulnerability affects the following AI/ML architecture patterns: TTS inference pipelines, NLP text normalization services, model serving, batch enrichment pipelines, training data preprocessing.

What is the CVSS score for CVE-2025-6051?

CVE-2025-6051 has a CVSS v3.1 base score of 5.3 (MEDIUM). The EPSS exploitation probability is 0.03%.

Technical Details

NVD Description

A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically within the `normalize_numbers()` method of the `EnglishNormalizer` class. This vulnerability affects versions up to 4.52.4 and is fixed in version 4.53.0. The issue arises from the method's handling of numeric strings, which can be exploited using crafted input strings containing long sequences of digits, leading to excessive CPU consumption. This vulnerability impacts text-to-speech and number normalization tasks, potentially causing service disruption, resource exhaustion, and API vulnerabilities.

Exploitation Scenario

An adversary identifies a public-facing TTS or text-normalization API endpoint powered by Hugging Face Transformers (discoverable via response headers, error messages, or job listings). They craft a POST request with a body containing a long numeric string (e.g., a 50,000-digit number). The normalize_numbers() regex enters catastrophic backtracking, consuming 100% of a CPU core for tens of seconds per request. By sending a low-rate stream of such requests (avoiding traditional rate-limit thresholds), the attacker degrades service quality for all users. In a single-threaded or worker-pool model, this effectively takes the endpoint offline. No credentials, no prior access, no AI/ML knowledge required.
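The catastrophic-backtracking behavior described above can be reproduced with a toy nested-quantifier pattern. Note this pattern is an illustrative stand-in, not the actual transformers regex: on a non-matching input, each added digit roughly doubles the number of backtracking paths the engine explores, so a few extra characters translate into far more CPU time.

```python
import re
import time

# Toy pattern with nested quantifiers -- an illustrative stand-in for
# the vulnerable construct, NOT the actual transformers regex.
pattern = re.compile(r"^(\d+)+$")


def match_time(n: int) -> float:
    """Time one failed match against n digits plus a trailing non-digit,
    which forces the engine to explore every way of partitioning the
    digit run between the inner and outer quantifier."""
    payload = "1" * n + "x"
    start = time.perf_counter()
    assert pattern.match(payload) is None
    return time.perf_counter() - start


# Work grows exponentially with input length.
for n in (16, 18, 20):
    print(n, f"{match_time(n):.4f}s")
```

Real-world payloads use tens of thousands of digits, making a single request cost tens of seconds of CPU. Linear-time regex engines (e.g., RE2) or the digit-run input caps from the workaround above remove this class of risk entirely.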

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L

Timeline

Published
September 14, 2025
Last Modified
October 21, 2025
First Seen
September 14, 2025

Related Vulnerabilities