CVE-2025-5197: Transformers: ReDoS in TF-to-PyTorch weight converter
GHSA-9356-575x-2w9m · MEDIUM · PoC available · CISA SSVC: Track

Hugging Face Transformers versions up to 4.51.3 contain a ReDoS in the TensorFlow-to-PyTorch model conversion function, exploitable by anyone who can supply crafted weight name strings to a conversion endpoint — no authentication required. If your MLOps pipeline or model serving API exposes TF→PT conversion to untrusted input, you are vulnerable to CPU exhaustion and service disruption. Patch immediately to transformers >= 4.53.0; until then, isolate conversion functions behind authentication or input validation.
Risk Assessment
Operational risk is low-to-medium. EPSS is near-zero (0.00035) and the CVE is not in CISA KEV, indicating no observed active exploitation. However, the CVSS attack vector is Network with no privileges or user interaction required, meaning any internet-exposed service invoking this function on user-supplied data is a viable target. The impact is limited to availability (A:L in CVSS), but in a high-throughput model-serving environment, repeated CPU spikes from concurrent ReDoS attacks could cascade into a full outage. The specific attack surface — TF-to-PyTorch weight name conversion — is niche but present in any organization migrating or serving multi-framework models.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| transformers | pip | < 4.53.0 | 4.53.0 |
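A quick way to check whether a given environment falls in the vulnerable range is to compare the installed version against 4.53.0. This is a minimal sketch: the crude numeric-only parse below is an assumption for illustration, and a production check should use `packaging.version` instead.

```python
# Quick local check: is the installed transformers version in the
# vulnerable range (< 4.53.0)? The numeric-only parse is a sketch;
# prefer packaging.version for real deployments.
from importlib.metadata import PackageNotFoundError, version

def _parse(v: str) -> tuple:
    # Keep the leading numeric components only ("4.53.0.dev0" -> (4, 53, 0)).
    parts = []
    for p in v.split("."):
        if not p.isdigit():
            break
        parts.append(int(p))
    return tuple(parts)

def is_vulnerable(installed: str) -> bool:
    # Tuple comparison gives the usual version ordering for numeric parts.
    return _parse(installed) < (4, 53, 0)

if __name__ == "__main__":
    try:
        v = version("transformers")
        print(f"transformers {v}: {'VULNERABLE' if is_vulnerable(v) else 'patched'}")
    except PackageNotFoundError:
        print("transformers is not installed in this environment")
```

Running this in each virtual environment (including CI images and serving containers) surfaces stragglers that a requirements-file audit can miss.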
Recommended Action
5 steps:

1. PATCH: Upgrade transformers to >= 4.53.0 immediately on all environments (`pip install --upgrade transformers`).
2. DETECT: Audit CI/CD, training scripts, and serving code for calls to `convert_tf_weight_name_to_pt_weight_name()` or any `from_pretrained()` path that loads TensorFlow checkpoints.
3. SHORT-TERM WORKAROUND: If patching is not immediately possible, gate TF-to-PyTorch conversion behind authentication and apply input length limits or regex sanitization on weight name strings before passing them to the vulnerable function.
4. MONITOR: Alert on sustained CPU spikes in model-serving or conversion worker processes as a potential exploitation indicator.
5. INVENTORY: Identify all internal tools, notebooks, and APIs that use the transformers library and prioritize those accepting external model inputs.
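The short-term workaround (step 3) can be sketched as a pre-filter applied to untrusted weight names before they reach the converter. The limit and allowlist below are illustrative assumptions, not values from the upstream project; note that a ReDoS payload here is made of ordinary letters and underscores, so the length cap, not the character allowlist, is the control that actually bounds backtracking cost.

```python
import re

# Hypothetical pre-filter for untrusted TF weight names. The cap and
# allowlist are illustrative choices, not upstream defaults.
MAX_WEIGHT_NAME_LEN = 512
# Anchored allowlist of benign characters; fullmatch on a simple class
# is backtracking-safe.
_ALLOWED = re.compile(r"[A-Za-z0-9_./-]+")

def validate_weight_name(name: str) -> str:
    """Reject oversized or suspicious weight names before they reach the
    vulnerable converter. The length cap is the key ReDoS mitigation:
    catastrophic backtracking needs long inputs to become expensive."""
    if len(name) > MAX_WEIGHT_NAME_LEN:
        raise ValueError(f"weight name too long ({len(name)} chars)")
    if not _ALLOWED.fullmatch(name):
        raise ValueError("weight name contains disallowed characters")
    return name
```

Calling `validate_weight_name()` at the API boundary (e.g. on each tensor name in an uploaded checkpoint) keeps the workaround in one place until the upgrade lands.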
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-5197?
Hugging Face Transformers versions up to 4.51.3 contain a ReDoS in the TensorFlow-to-PyTorch model conversion function, exploitable by anyone who can supply crafted weight name strings to a conversion endpoint — no authentication required. If your MLOps pipeline or model serving API exposes TF→PT conversion to untrusted input, you are vulnerable to CPU exhaustion and service disruption. Patch immediately to transformers >= 4.53.0; until then, isolate conversion functions behind authentication or input validation.
Is CVE-2025-5197 actively exploited?
There is no evidence of active exploitation: EPSS is near-zero and CVE-2025-5197 is not in the CISA KEV catalog. However, proof-of-concept exploit code is publicly available, which increases the risk of future exploitation.
How to fix CVE-2025-5197?
1. PATCH: Upgrade transformers to >= 4.53.0 immediately on all environments (pip install --upgrade transformers). 2. DETECT: Audit CI/CD, training scripts, and serving code for calls to `convert_tf_weight_name_to_pt_weight_name()` or any `from_pretrained()` path that loads TensorFlow checkpoints. 3. SHORT-TERM WORKAROUND: If patching is not immediately possible, gate TF-to-PyTorch conversion behind authentication and apply input length limits or regex sanitization on weight name strings before passing to the vulnerable function. 4. MONITOR: Alert on sustained CPU spikes in model-serving or conversion worker processes as a potential exploitation indicator. 5. INVENTORY: Identify all internal tools, notebooks, and APIs that use the transformers library and prioritize those accepting external model inputs.
What systems are affected by CVE-2025-5197?
This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps pipelines, model registries.
What is the CVSS score for CVE-2025-5197?
CVE-2025-5197 has a CVSS v3.0 base score of 5.3 (MEDIUM). The EPSS exploitation probability is 0.035%.
Technical Details
NVD Description
A Regular Expression Denial of Service (ReDoS) vulnerability exists in the Hugging Face Transformers library, specifically in the `convert_tf_weight_name_to_pt_weight_name()` function. This function, responsible for converting TensorFlow weight names to PyTorch format, uses a regex pattern `/[^/]*___([^/]*)/` that can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. The vulnerability affects versions up to 4.51.3 and is fixed in version 4.53.0. This issue can lead to service disruption, resource exhaustion, and potential API service vulnerabilities, impacting model conversion processes between TensorFlow and PyTorch formats.
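For a single slash-free weight-name segment, the vulnerable pattern effectively captures whatever follows the *last* `___` separator, which can be computed in linear time with a plain string split. The sketch below demonstrates that equivalence; it is an illustration of why the regex is avoidable, not the actual fix shipped in 4.53.0.

```python
import re

# The vulnerable pattern from the advisory: for a slash-free segment it
# captures whatever follows the *last* "___" separator.
VULN_PATTERN = re.compile(r"[^/]*___([^/]*)")

def capture_with_regex(segment: str):
    m = VULN_PATTERN.search(segment)
    return m.group(1) if m else None

def capture_with_rpartition(segment: str):
    # Linear-time equivalent for slash-free segments: split once on the
    # last "___". (Illustrative only; not the actual 4.53.0 patch.)
    _head, sep, tail = segment.rpartition("___")
    return tail if sep else None

# The two agree on slash-free names, including repeated separators.
for name in ["adam_v", "layer___kernel", "a___b___c", "a____b"]:
    assert capture_with_regex(name) == capture_with_rpartition(name)
```

The difference in cost is the point: `rpartition` scans the string once, while the greedy `[^/]*` prefix forces the regex engine to retry `___` at many positions, which is what crafted inputs exploit.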
Exploitation Scenario
An adversary identifies a public or lightly-authenticated model-serving API that accepts TensorFlow checkpoint uploads and internally calls `convert_tf_weight_name_to_pt_weight_name()` during model loading. The attacker crafts a malicious checkpoint with a weight name such as `aaa___aaa___aaa___...` (thousands of characters designed to trigger catastrophic backtracking in the `/[^/]*___([^/]*)/` regex). The attacker submits concurrent requests with these payloads. Each request causes the conversion worker to spike to 100% CPU for an extended period. With sufficient concurrent requests, the service becomes unresponsive — disrupting model inference for legitimate users. In a pay-per-use or metered environment, this also drives up compute costs for the victim.
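A containment-style defense against this scenario is to run the match under a hard time budget so a runaway regex can be killed rather than pinning a worker at 100% CPU. Python's `re` matching cannot be interrupted in-process, so the sketch below isolates it in a child process; this wrapper and its timeout value are hypothetical assumptions, not part of transformers.

```python
import multiprocessing as mp
import re

# Pattern under test, from the advisory.
PATTERN = re.compile(r"[^/]*___([^/]*)")

def _search(name: str, q: mp.Queue) -> None:
    # Runs in the child process; result (or None) goes back via the queue.
    m = PATTERN.search(name)
    q.put(m.group(1) if m else None)

def guarded_search(name: str, timeout: float = 0.5):
    """Run the regex in a child process so a runaway match can be killed.

    Hypothetical containment wrapper; the 0.5 s budget is an assumption
    to be tuned per deployment."""
    q = mp.Queue()
    p = mp.Process(target=_search, args=(name, q))
    p.start()
    p.join(timeout)
    if p.is_alive():
        # Match exceeded its budget: kill it and treat the input as hostile.
        p.terminate()
        p.join()
        raise TimeoutError("weight name rejected: regex exceeded time budget")
    return q.get()
```

Per-request process spawning adds latency, so in practice this belongs in a pooled conversion worker; the design goal is simply that a ReDoS payload costs the attacker a request but costs the service only the time budget.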
Weaknesses (CWE)
CWE-1333: Inefficient Regular Expression Complexity
CVSS Vector
CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L

References
- github.com/advisories/GHSA-9356-575x-2w9m
- github.com/huggingface/transformers/commit/701caef704e356dc2f9331cc3fd5df0eccb4720a
- nvd.nist.gov/vuln/detail/CVE-2025-5197
- github.com/huggingface/transformers/commit/944b56000be5e9b61af8301aa340838770ad8a0b (patch)
- huntr.com/bounties/3f8b3fd0-166b-46e7-b60f-60dd9d2678bf (exploit, issue, patch, third party)
Related Vulnerabilities
- CVE-2024-3568 (9.6) HuggingFace Transformers: RCE via pickle deserialization (same package: transformers)
- CVE-2023-6730 (8.8) HuggingFace Transformers: RCE via unsafe deserialization (same package: transformers)
- CVE-2024-11393 (8.8) Transformers: RCE via MaskFormer model deserialization (same package: transformers)
- CVE-2024-11392 (8.8) HuggingFace Transformers: RCE via config deserialization (same package: transformers)
- CVE-2024-11394 (8.8) Transformers: RCE via Trax model deserialization (same package: transformers)