CVE-2025-6638: HuggingFace Transformers: ReDoS in MarianTokenizer
GHSA-59p9-h35m-wg4g | HIGH | PoC available | CISA SSVC: Track

Upgrade Hugging Face Transformers to 4.53.0 immediately if your ML stack includes multilingual translation pipelines using MarianMT models. Any internet-facing service that passes untrusted text through MarianTokenizer is vulnerable to CPU exhaustion attacks with no authentication required. The fix is a one-line pip upgrade with no breaking changes.
Risk Assessment
Moderate operational risk despite the high CVSS (7.5). An EPSS of 0.00032 (0.03%) signals near-zero observed exploitation in the wild, and the impact is limited to availability — no data exfiltration or code execution is possible. However, the attack is trivially executable (no auth, no prior access, low complexity) against any exposed translation endpoint, making it a viable DoS vector for motivated adversaries targeting AI-powered services. Risk is elevated for SaaS platforms that expose multilingual NLP APIs publicly.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| transformers | pip | < 4.53.0 | 4.53.0 |
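The vulnerable range in the table can be checked programmatically. A minimal sketch: `is_vulnerable` is an illustrative helper (not part of transformers or pip) and assumes plain numeric `x.y.z` version strings, not pre-release or local-version suffixes.

```python
def is_vulnerable(ver: str) -> bool:
    """Return True if a transformers version string falls in the
    vulnerable range (< 4.53.0) from the table above.

    Sketch only: handles plain numeric x.y.z strings.
    """
    parts = tuple(int(p) for p in ver.split(".")[:3])
    # Pad short versions like "4.53" out to three components.
    parts += (0,) * (3 - len(parts))
    return parts < (4, 53, 0)

print(is_vulnerable("4.52.4"))  # the version named in the advisory -> True
print(is_vulnerable("4.53.0"))  # the patched release -> False
```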
Recommended Action
1. PATCH (immediate): pip install 'transformers>=4.53.0' — the fix is available and non-breaking.
2. VERIFY: Run 'pip show transformers | grep Version' across all inference nodes, CI/CD workers, and training environments. Container images built before the 4.53.0 release need rebuilding.
3. WORKAROUND (if patching is delayed): Enforce input length limits upstream (e.g., max 512 chars) before text reaches the tokenizer; validate that input does not contain malformed Unicode or excessive special-character sequences.
4. DETECT: Monitor CPU spike patterns on translation endpoints; anomalous sustained CPU usage from a single source IP is the primary indicator.
5. SCOPE CHECK: grep your codebase for 'MarianTokenizer' to identify all usage points.
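The WORKAROUND step's upstream validation can be sketched as a small gate in front of the tokenizer. This is illustrative only: `sanitize_for_tokenizer`, the 512-character cap, and the 50% special-character cutoff are assumptions to tune for your workload, not anything defined by transformers.

```python
MAX_INPUT_CHARS = 512  # conservative cap; tune for your workload

def sanitize_for_tokenizer(text: str) -> str:
    """Pre-validate untrusted text before it reaches MarianTokenizer.

    Mitigation sketch for use while patching to transformers>=4.53.0
    is delayed; thresholds here are illustrative assumptions.
    """
    if len(text) > MAX_INPUT_CHARS:
        raise ValueError(f"input exceeds {MAX_INPUT_CHARS} characters")
    # Reject inputs dominated by non-alphanumeric special characters,
    # where crafted ReDoS payloads tend to concentrate.
    specials = sum(1 for ch in text if not (ch.isalnum() or ch.isspace()))
    if text and specials / len(text) > 0.5:
        raise ValueError("input contains excessive special characters")
    return text

print(sanitize_for_tokenizer("Guten Tag, Welt"))  # passes through unchanged
```

A gate like this belongs at the API boundary (request handler or ingestion worker), so rejected payloads never occupy an inference thread.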
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-6638?
CVE-2025-6638 is a Regular Expression Denial of Service (ReDoS) vulnerability in the Hugging Face Transformers library, specifically in the MarianTokenizer's `remove_language_code()` method. Crafted input strings containing malformed language code patterns trigger inefficient regex processing, leading to excessive CPU consumption and denial of service. The flaw affects transformers versions before 4.53.0 and is fixed in 4.53.0.
Is CVE-2025-6638 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2025-6638, increasing the risk of exploitation.
How to fix CVE-2025-6638?
1. PATCH (immediate): pip install 'transformers>=4.53.0' — the fix is available and non-breaking.
2. VERIFY: Run 'pip show transformers | grep Version' across all inference nodes, CI/CD workers, and training environments. Container images built before the 4.53.0 release need rebuilding.
3. WORKAROUND (if patching is delayed): Enforce input length limits upstream (e.g., max 512 chars) before text reaches the tokenizer; validate that input does not contain malformed Unicode or excessive special-character sequences.
4. DETECT: Monitor CPU spike patterns on translation endpoints; anomalous sustained CPU usage from a single source IP is the primary indicator.
5. SCOPE CHECK: grep your codebase for 'MarianTokenizer' to identify all usage points.
What systems are affected by CVE-2025-6638?
This vulnerability affects the following AI/ML architecture patterns: NLP translation pipelines, model serving, multilingual RAG ingestion, document processing pipelines, batch training pipelines.
What is the CVSS score for CVE-2025-6638?
CVE-2025-6638 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.03%.
Technical Details
NVD Description
A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically affecting the MarianTokenizer's `remove_language_code()` method. This vulnerability is present in version 4.52.4 and has been fixed in version 4.53.0. The issue arises from inefficient regex processing, which can be exploited by crafted input strings containing malformed language code patterns, leading to excessive CPU consumption and potential denial of service.
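The mechanics of catastrophic backtracking can be shown with a textbook example. Note that the pattern below is NOT the regex used by `remove_language_code()`; it is a minimal, self-contained illustration of why a short crafted string can consume exponential CPU time.

```python
import re
import time

# Illustrative only: a classic catastrophic-backtracking pattern,
# not the regex inside MarianTokenizer.remove_language_code().
EVIL = re.compile(r"(a+)+$")

def probe(n: int) -> float:
    """Time the pattern against a non-matching input of length n + 1."""
    payload = "a" * n + "!"  # trailing '!' forces the engine to backtrack
    start = time.perf_counter()
    EVIL.match(payload)
    return time.perf_counter() - start

# Each additional 'a' roughly doubles the work: exponential blow-up,
# which is how a modest payload can pin a CPU core.
for n in (16, 18, 20):
    print(f"n={n}: {probe(n):.4f}s")
```

The `(a+)+` nesting lets the engine partition the run of `a`s in exponentially many ways before concluding no match exists; an inefficient pattern applied to attacker-controlled text fails the same way.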
Exploitation Scenario
An adversary identifies a public-facing translation API or document ingestion endpoint powered by HuggingFace Transformers. Using a fuzzing tool or manual crafting, they construct strings with malformed language code patterns — sequences that trigger catastrophic backtracking in the 'remove_language_code()' regex. The adversary sends a modest volume of these payloads concurrently (no flood required — each request saturates a CPU thread). Within seconds, the inference server's CPU reaches 100%, blocking all legitimate requests. For containerized deployments without CPU limits, this can cascade to affect co-located services. No credentials, API keys, or prior knowledge of the model are required.
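One containment pattern for this scenario is to isolate tokenization in a worker process that can be killed on a deadline, since CPython cannot interrupt a regex match already running in C. A sketch under assumptions: `tokenize_with_deadline` is a hypothetical helper (not a transformers API), and the worker uses a trivial `str.split` stand-in where the real MarianTokenizer call would go.

```python
import multiprocessing as mp

def _worker(q, text: str) -> None:
    # Stand-in for the real tokenizer call (e.g. a MarianTokenizer
    # instance); a plain split keeps the sketch self-contained.
    q.put(text.split())

def tokenize_with_deadline(text: str, timeout: float = 2.0) -> list:
    """Run tokenization in a child process, killing it on overrun.

    With this pattern a ReDoS payload costs one worker restart
    instead of pinning an inference thread until the regex finishes.
    """
    q = mp.Queue()
    proc = mp.Process(target=_worker, args=(q, text))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()  # hard-stop the runaway match
        proc.join()
        raise TimeoutError("tokenization exceeded deadline")
    return q.get()

if __name__ == "__main__":  # guard needed on spawn-based platforms
    print(tokenize_with_deadline("guten Tag Welt"))
```

This complements, rather than replaces, patching and container CPU limits: it bounds the blast radius of any one request but still burns a core for the duration of the timeout.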
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
References
- github.com/advisories/GHSA-59p9-h35m-wg4g
- github.com/huggingface/transformers/commit/d37f7517972f67e3f2194c000ed0f87f064e5099
- nvd.nist.gov/vuln/detail/CVE-2025-6638
- github.com/huggingface/transformers/commit/47c34fba5c303576560cb29767efb452ff12b8be (Patch)
- huntr.com/bounties/6a6c933f-9ce8-4ded-8b3b-2c1444c61f36 (Exploit, 3rd Party)
- github.com/ARPSyndicate/cve-scores (Exploit)
Related Vulnerabilities
- CVE-2024-3568 (9.6) HuggingFace Transformers: RCE via pickle deserialization (same package: transformers)
- CVE-2024-11393 (8.8) Transformers: RCE via MaskFormer model deserialization (same package: transformers)
- CVE-2023-6730 (8.8) HuggingFace Transformers: RCE via unsafe deserialization (same package: transformers)
- CVE-2024-11392 (8.8) HuggingFace Transformers: RCE via config deserialization (same package: transformers)
- CVE-2024-11394 (8.8) Transformers: RCE via Trax model deserialization (same package: transformers)