CVE-2025-6638: HuggingFace Transformers: ReDoS in MarianTokenizer

GHSA-59p9-h35m-wg4g · Severity: HIGH · PoC available · CISA SSVC: Track*
Published September 12, 2025
CISO Take

Upgrade Hugging Face Transformers to 4.53.0 immediately if your ML stack includes multilingual translation pipelines using MarianMT models. Any internet-facing service that passes untrusted text through MarianTokenizer is vulnerable to CPU exhaustion attacks with no authentication required. The fix is a one-line pip upgrade with no breaking changes.

Risk Assessment

Moderate operational risk despite the high CVSS (7.5). EPSS of 0.00032 signals near-zero observed exploitation in the wild, and the vulnerability is limited to availability impact only — no data exfiltration or code execution possible. However, the attack is trivially executable (no auth, no prior access, low complexity) against any exposed translation endpoint, making it a viable DoS vector for motivated adversaries targeting AI-powered services. Risk is elevated for SaaS platforms that expose multilingual NLP APIs publicly.

Affected Systems

Package Ecosystem Vulnerable Range Patched
transformers pip < 4.53.0 4.53.0

Severity & Risk

CVSS 3.1: 7.5 / 10 (HIGH)
EPSS: 0.03% chance of exploitation in 30 days (higher than 10% of all CVEs)
Exploitation Status: Exploit Available (signal: Medium; confidence: medium)
Sophistication: Trivial
CISA SSVC: Public PoC indexed (trickest/cve)

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

AV: Network
AC: Low
PR: None
UI: None
S: Unchanged
C: None
I: None
A: High

Recommended Action

  1. PATCH (immediate): pip install 'transformers>=4.53.0' — the fix is available and non-breaking.

  2. VERIFY: Run 'pip show transformers | grep Version' across all inference nodes, CI/CD workers, and training environments. Container images built before the 4.53.0 release need rebuilding.

  3. WORKAROUND (if patching is delayed): Enforce input length limits upstream (e.g., max 512 chars) before text reaches the tokenizer; validate that input does not contain malformed Unicode or excessive special-character sequences.

  4. DETECT: Monitor CPU spike patterns on translation endpoints; anomalous sustained CPU usage from a single source IP is the primary indicator.

  5. SCOPE CHECK: grep your codebase for 'MarianTokenizer' to identify all usage points.
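The workaround in step 3 can be enforced with a small pre-tokenizer guard. The sketch below is illustrative, not code from the Transformers project; the 512-character cap comes from the advisory's suggestion, while the `SUSPICIOUS_RUN` heuristic and function name are assumptions to be tuned per workload.

```python
import re

MAX_LEN = 512  # upstream length cap suggested in step 3 (assumption: tune per workload)

# Heuristic (assumption): long runs of non-alphanumeric, non-whitespace characters
# are a common shape for ReDoS payloads. A character class with a bounded
# quantifier is itself linear-time, so this check cannot backtrack.
SUSPICIOUS_RUN = re.compile(r"[^\w\s]{32,}")

def is_safe_input(text: str) -> bool:
    """Reject oversized or suspicious strings before they reach MarianTokenizer."""
    if len(text) > MAX_LEN:
        return False
    # Reject malformed Unicode, e.g. lone surrogates that cannot round-trip UTF-8.
    try:
        text.encode("utf-8")
    except UnicodeEncodeError:
        return False
    # Reject excessive special-character sequences.
    if SUSPICIOUS_RUN.search(text):
        return False
    return True
```

A serving layer would call `is_safe_input()` and return HTTP 400 before ever invoking the tokenizer, so legitimate translation inputs pass through unchanged while oversized or malformed payloads are dropped at the edge.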

CISA SSVC Assessment

Decision: Track*
Exploitation: PoC
Automatable: Yes
Technical Impact: Partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 9 - Risk management system
ISO 42001
6.1.2 - AI risk assessment 8.4 - AI system operation
NIST AI RMF
GOVERN-1.7 - Processes for AI risk from third-party dependencies MANAGE-2.4 - Residual risks are monitored and managed
OWASP LLM Top 10
LLM05 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2025-6638?

CVE-2025-6638 is a Regular Expression Denial of Service (ReDoS) vulnerability in the Hugging Face Transformers library, specifically in the MarianTokenizer's `remove_language_code()` method. Crafted input strings containing malformed language code patterns trigger inefficient regex processing, leading to excessive CPU consumption and denial of service. The flaw is present in version 4.52.4 and fixed in 4.53.0; any internet-facing service that passes untrusted text through MarianTokenizer is exposed, with no authentication required.

Is CVE-2025-6638 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2025-6638, increasing the risk of exploitation.

How to fix CVE-2025-6638?

  1. PATCH (immediate): pip install 'transformers>=4.53.0' — the fix is available and non-breaking.
  2. VERIFY: Run 'pip show transformers | grep Version' across all inference nodes, CI/CD workers, and training environments. Container images built before the 4.53.0 release need rebuilding.
  3. WORKAROUND (if patching is delayed): Enforce input length limits upstream (e.g., max 512 chars) before text reaches the tokenizer; validate that input does not contain malformed Unicode or excessive special-character sequences.
  4. DETECT: Monitor CPU spike patterns on translation endpoints; anomalous sustained CPU usage from a single source IP is the primary indicator.
  5. SCOPE CHECK: grep your codebase for 'MarianTokenizer' to identify all usage points.

What systems are affected by CVE-2025-6638?

This vulnerability affects the following AI/ML architecture patterns: NLP translation pipelines, model serving, multilingual RAG ingestion, document processing pipelines, batch training pipelines.

What is the CVSS score for CVE-2025-6638?

CVE-2025-6638 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.03%.

Technical Details

NVD Description

A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically affecting the MarianTokenizer's `remove_language_code()` method. This vulnerability is present in version 4.52.4 and has been fixed in version 4.53.0. The issue arises from inefficient regex processing, which can be exploited by crafted input strings containing malformed language code patterns, leading to excessive CPU consumption and potential denial of service.
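Since the fix landed in 4.53.0, whether an environment is affected reduces to a version comparison. A minimal sketch, assuming plain `major.minor.patch` version strings (the helper names `is_patched` and `check_environment` are illustrative, not part of any library API):

```python
from importlib.metadata import version, PackageNotFoundError

def is_patched(ver: str) -> bool:
    """True if a transformers version string is >= 4.53.0, the fixed release."""
    # Keep only the leading numeric components; a suffix such as "dev0" is
    # treated as the base version (simplifying assumption of this sketch).
    parts = []
    for piece in ver.split("."):
        if piece.isdigit():
            parts.append(int(piece))
        else:
            break
    # Tuple comparison avoids string-comparison pitfalls like "4.9" > "4.53".
    return tuple(parts) >= (4, 53, 0)

def check_environment() -> str:
    """Report whether the locally installed transformers package is affected."""
    try:
        ver = version("transformers")
    except PackageNotFoundError:
        return "transformers is not installed"
    status = "patched" if is_patched(ver) else "VULNERABLE to CVE-2025-6638"
    return f"transformers {ver}: {status}"
```

Running `check_environment()` on each inference node complements the `pip show` check from the VERIFY step and is easy to wire into a CI gate.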

Exploitation Scenario

An adversary identifies a public-facing translation API or document ingestion endpoint powered by HuggingFace Transformers. Using a fuzzing tool or manual crafting, they construct strings with malformed language code patterns — sequences that trigger catastrophic backtracking in the 'remove_language_code()' regex. The adversary sends a modest volume of these payloads concurrently (no flood required — each request saturates a CPU thread). Within seconds, the inference server's CPU reaches 100%, blocking all legitimate requests. For containerized deployments without CPU limits, this can cascade to affect co-located services. No credentials, API keys, or prior knowledge of the model are required.
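The catastrophic-backtracking mechanism behind this scenario can be seen with a deliberately toy pattern. This is not the regex used in `remove_language_code()`; the pattern and inputs below are illustrative only. Each extra character roughly doubles the work the backtracking engine does before concluding there is no match:

```python
import re
import time

# Classic ReDoS shape: the nested quantifiers force the engine to try
# exponentially many ways to split the run of 'a's before failing.
TOY_PATTERN = re.compile(r"^(a+)+$")

def failed_match_time(n: int) -> float:
    """Seconds for the toy pattern to reject n 'a's followed by a 'b'."""
    payload = "a" * n + "b"  # the trailing 'b' guarantees the match fails
    start = time.perf_counter()
    assert TOY_PATTERN.match(payload) is None
    return time.perf_counter() - start

# Work grows roughly as 2^n, so keep n small for a quick demonstration.
for n in (10, 15, 20):
    print(f"n={n}: {failed_match_time(n):.4f}s")
```

With payloads only a few characters longer, a single request can pin a CPU core for minutes, which is why the scenario above needs no flood of traffic.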

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

Timeline

Published
September 12, 2025
Last Modified
October 21, 2025
First Seen
September 12, 2025

Related Vulnerabilities