CVE-2025-2099: transformers: ReDoS in testing_utils causes DoS
GHSA-qq3j-4f4f-9583 · HIGH · PoC available · CISA SSVC: Track

HuggingFace Transformers < 4.50.0 contains a ReDoS flaw in its testing utility module that can pin a Python worker at 100% CPU via a crafted newline payload, with no authentication required. While the vulnerable function lives in testing infrastructure, any pipeline or service passing untrusted input through preprocess_string() is exposed. Upgrade to transformers 4.50.0 immediately; no workaround addresses the root cause.
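A quick way to check whether a given install falls in the vulnerable range is to compare its version string against 4.50.0. A minimal sketch, assuming plain X.Y.Z release tags (which holds for transformers releases; the parser and helper names are illustrative, not part of any library API):

```python
from importlib.metadata import PackageNotFoundError, version

def parse(v: str) -> tuple:
    # naive X.Y.Z parser; sufficient for transformers release tags
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

def is_vulnerable(installed: str) -> bool:
    # CVE-2025-2099 affects transformers < 4.50.0
    return parse(installed) < (4, 50, 0)

def check_installed():
    """Return True/False for the local environment, or None if
    transformers is not installed at all."""
    try:
        return is_vulnerable(version("transformers"))
    except PackageNotFoundError:
        return None
```

For example, is_vulnerable("4.48.3") is True, while is_vulnerable("4.50.0") and anything later is False.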
Risk Assessment
CVSS 7.5 High with a network-accessible, zero-auth attack vector, but practical risk is moderated by deployment context: the vulnerable function resides in transformers.testing_utils, limiting direct production exposure. EPSS of 0.00092 reflects very low active exploitation probability today. Risk elevates significantly in shared ML platforms, cloud notebook environments, CI/CD pipelines that process external code contributions, or any service that imports testing_utils outside a test harness. Cost-per-compute environments face both availability and financial impact from sustained CPU exhaustion.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| transformers | pip | < 4.50.0 | 4.50.0 |
Recommended Action (5 steps)

1. PATCH: Upgrade immediately with pip install --upgrade 'transformers>=4.50.0', then verify with pip show transformers | grep Version.
2. AUDIT: Identify any production service or pipeline importing transformers.testing_utils and prioritize those for emergency patching.
3. WORKAROUND (if patching is blocked): Enforce input length limits upstream; reject or truncate inputs exceeding 1,000 characters before they reach preprocess_string().
4. DETECT: Alert on Python worker processes sustaining >80% CPU for more than 30 seconds on ML infrastructure.
5. DEPENDENCY SCAN: Run pip-audit or safety check against your transformers dependency tree across all environments (dev, staging, prod, CI).
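The workaround step (input length limits) can be sketched as a small gate in front of any code path that reaches preprocess_string(). The 1,000-character threshold comes from the guidance above; collapsing newline runs is an additional, hypothetical hardening step, included because the attack specifically relies on long runs of consecutive newlines:

```python
import re

MAX_LEN = 1000  # threshold from the workaround guidance above

def guard_input(text: str, max_len: int = MAX_LEN) -> str:
    """Truncate oversized input and collapse newline runs before the
    text can reach a backtracking-prone regex (illustrative policy:
    truncate rather than reject)."""
    if len(text) > max_len:
        text = text[:max_len]
    # the ReDoS payload relies on thousands of consecutive newlines;
    # collapsing runs to two defuses it without disturbing normal text
    return re.sub(r"\n{3,}", "\n\n", text)
```

For instance, a 5,000-newline payload comes out as a two-character string, while ordinary text with paragraph breaks passes through unchanged.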
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Classification: Track
Frequently Asked Questions
What is CVE-2025-2099?
HuggingFace Transformers < 4.50.0 contains a ReDoS flaw in its testing utility module that can pin a Python worker at 100% CPU via a crafted newline payload, with no authentication required. While the vulnerable function lives in testing infrastructure, any pipeline or service passing untrusted input through preprocess_string() is exposed. Upgrade to transformers 4.50.0 immediately; no workaround addresses the root cause.
Is CVE-2025-2099 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2025-2099, increasing the risk of exploitation.
How to fix CVE-2025-2099?
1. PATCH: Upgrade immediately with pip install --upgrade 'transformers>=4.50.0', then verify with pip show transformers | grep Version.
2. AUDIT: Identify any production service or pipeline importing transformers.testing_utils and prioritize those for emergency patching.
3. WORKAROUND (if patching is blocked): Enforce input length limits upstream; reject or truncate inputs exceeding 1,000 characters before they reach preprocess_string().
4. DETECT: Alert on Python worker processes sustaining >80% CPU for more than 30 seconds on ML infrastructure.
5. DEPENDENCY SCAN: Run pip-audit or safety check against your transformers dependency tree across all environments (dev, staging, prod, CI).
What systems are affected by CVE-2025-2099?
This vulnerability affects the following AI/ML architecture patterns: training pipelines, CI/CD ML pipelines, shared ML platforms, model development environments.
What is the CVSS score for CVE-2025-2099?
CVE-2025-2099 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.09%.
Technical Details
NVD Description
A vulnerability in the `preprocess_string()` function of the `transformers.testing_utils` module in huggingface/transformers version v4.48.3 allows for a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can exploit this by providing a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS) scenario.
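The advisory does not quote the exact regex, but the failure mode can be reproduced with any nested-quantifier pattern over newlines. In the illustrative sketch below (not the actual pattern from transformers.testing_utils), the trailing x guarantees the match fails, so the engine must try exponentially many ways to split the newlines between the inner and outer quantifiers; each additional newline roughly doubles the match time:

```python
import re
import time

# illustrative nested-quantifier pattern, NOT the actual regex in
# transformers.testing_utils; the trailing 'x' guarantees match failure,
# forcing the engine to backtrack through every split of the newlines
EVIL = re.compile(r"(\n+)+x")

def match_time(n: int) -> float:
    """Time a (failing) match against a payload of n newlines."""
    payload = "\n" * n
    start = time.perf_counter()
    EVIL.match(payload)  # always fails, after roughly 2**n backtracking steps
    return time.perf_counter() - start
```

With this pattern, a 22-newline payload already takes orders of magnitude longer than a 10-newline one, while the equivalent linear-time pattern r"\n+x" fails instantly at any length.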
Exploitation Scenario
An attacker identifies a CI/CD pipeline or code documentation service that runs transformers test utilities against submitted code. They craft a pull request or API submission containing a docstring with a code block followed by 5,000+ consecutive newline characters. When preprocess_string() evaluates this input, the nested quantifiers in the internal regex trigger catastrophic backtracking—the process stalls at 100% CPU for minutes per request. By flooding the endpoint with concurrent malicious payloads, the attacker exhausts all available worker threads, taking the CI pipeline or ML service offline. No credentials, no prior access, no ML expertise required.
Weaknesses (CWE)
CWE-1333: Inefficient Regular Expression Complexity
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

References
- github.com/advisories/GHSA-qq3j-4f4f-9583
- github.com/huggingface/transformers/pull/36648
- github.com/pypa/advisory-database/tree/main/vulns/transformers/PYSEC-2025-40.yaml
- nvd.nist.gov/vuln/detail/CVE-2025-2099
- github.com/huggingface/transformers/commit/8cb522b4190bd556ce51be04942720650b1a3e57 (patch)
- huntr.com/bounties/97b780f3-ffca-424f-ad5d-0e1c57a5bde4 (exploit, third party)
- github.com/ARPSyndicate/cve-scores (exploit)
- github.com/Kwaai-AI-Lab/OpenAI-Petal (exploit)
- github.com/fkie-cad/nvd-json-data-feeds (exploit)
Related Vulnerabilities
- CVE-2024-3568 (9.6): HuggingFace Transformers: RCE via pickle deserialization (same package: transformers)
- CVE-2023-6730 (8.8): HuggingFace Transformers: RCE via unsafe deserialization (same package: transformers)
- CVE-2024-11392 (8.8): HuggingFace Transformers: RCE via config deserialization (same package: transformers)
- CVE-2024-11393 (8.8): Transformers: RCE via MaskFormer model deserialization (same package: transformers)
- CVE-2024-11394 (8.8): Transformers: RCE via Trax model deserialization (same package: transformers)