CVE-2025-5197: Transformers: ReDoS in TF-to-PyTorch weight converter

GHSA-9356-575x-2w9m · Severity: MEDIUM · PoC available · CISA SSVC: Track*
Published August 6, 2025
CISO Take

Hugging Face Transformers versions up to 4.51.3 contain a ReDoS in the TensorFlow-to-PyTorch model conversion function, exploitable by anyone who can supply crafted weight name strings to a conversion endpoint — no authentication required. If your MLOps pipeline or model serving API exposes TF→PT conversion to untrusted input, you are vulnerable to CPU exhaustion and service disruption. Upgrade to transformers >= 4.53.0 immediately; until then, isolate conversion functions behind authentication or input validation.

Risk Assessment

Operational risk is low-to-medium. EPSS is near-zero (0.00035) and the CVE is not in CISA KEV, indicating no observed active exploitation. However, the CVSS attack vector is Network with no privileges or user interaction required, meaning any internet-exposed service invoking this function on user-supplied data is a viable target. The impact is limited to availability (A:L in CVSS), but in a high-throughput model-serving environment, repeated CPU spikes from concurrent ReDoS attacks could cascade into a full outage. The specific attack surface — TF-to-PyTorch weight name conversion — is niche but present in any organization migrating or serving multi-framework models.

Affected Systems

Package Ecosystem Vulnerable Range Patched
transformers pip < 4.53.0 4.53.0

Severity & Risk

CVSS 3.1
5.3 / 10
EPSS
0.035%
chance of exploitation in 30 days
Higher than 10% of all CVEs
Exploitation Status
Exploit Available
Exploitation: MEDIUM
Sophistication
Trivial
Exploitation Confidence
Medium
CISA SSVC: Public PoC
Public PoC indexed (trickest/cve)
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

Metric Value
Attack Vector (AV) Network
Attack Complexity (AC) Low
Privileges Required (PR) None
User Interaction (UI) None
Scope (S) Unchanged
Confidentiality (C) None
Integrity (I) None
Availability (A) Low

Recommended Action

5 steps
  1. PATCH

    Upgrade transformers to >= 4.53.0 immediately on all environments (pip install --upgrade transformers).

  2. DETECT

    Audit CI/CD, training scripts, and serving code for calls to convert_tf_weight_name_to_pt_weight_name() or any from_pretrained() path that loads TensorFlow checkpoints.

  3. SHORT-TERM WORKAROUND

    If patching is not immediately possible, gate TF-to-PyTorch conversion behind authentication and apply input length limits or regex sanitization on weight name strings before passing them to the vulnerable function.

  4. MONITOR

    Alert on sustained CPU spikes in model-serving or conversion worker processes as a potential exploitation indicator.

  5. INVENTORY

    Identify all internal tools, notebooks, and APIs that use the transformers library and prioritize those accepting external model inputs.
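The first three steps above can be sketched in a few lines. This is a hedged illustration, not transformers API: `is_patched` assumes plain "X.Y.Z" version strings (production code should use `packaging.version`), and `validate_weight_name` with its 256-character cap is a hypothetical pre-conversion gate whose limit you would tune to your own checkpoints.

```python
import re

FIXED_VERSION = (4, 53, 0)  # first patched release per this advisory

def is_patched(version: str) -> bool:
    """Return True if a transformers version string is >= 4.53.0.

    Minimal sketch: assumes plain "X.Y.Z" strings; use
    packaging.version.parse for pre-releases and local versions.
    """
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts + (0,) * (3 - len(parts)) >= FIXED_VERSION

# Hypothetical workaround gate (step 3): the length cap is what bounds
# the quadratic backtracking cost, since 256^2 steps are negligible.
MAX_WEIGHT_NAME_LEN = 256  # assumption: legitimate names are far shorter
SAFE_NAME_RE = re.compile(r"[A-Za-z0-9_./-]+")

def validate_weight_name(name: str) -> str:
    """Reject oversized or unexpected weight names before conversion."""
    if len(name) > MAX_WEIGHT_NAME_LEN:
        raise ValueError("weight name exceeds length limit")
    if not SAFE_NAME_RE.fullmatch(name):
        raise ValueError("weight name contains unexpected characters")
    return name
```

Applied in front of any code path that feeds untrusted checkpoint metadata into TF→PT conversion, the length cap alone defuses the ReDoS even though underscores remain allowed.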

CISA SSVC Assessment

Decision Track*
Exploitation PoC
Automatable Yes
Technical Impact Partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.9.7 - AI system availability and resilience
NIST AI RMF
MANAGE-2.2 - Mechanisms to sustain value of deployed AI are in place
OWASP LLM Top 10
LLM04:2023 - Model Denial of Service
LLM05:2023 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2025-5197?

Hugging Face Transformers versions up to 4.51.3 contain a ReDoS in the TensorFlow-to-PyTorch model conversion function, exploitable by anyone who can supply crafted weight name strings to a conversion endpoint — no authentication required. If your MLOps pipeline or model serving API exposes TF→PT conversion to untrusted input, you are vulnerable to CPU exhaustion and service disruption. Upgrade to transformers >= 4.53.0 immediately; until then, isolate conversion functions behind authentication or input validation.

Is CVE-2025-5197 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2025-5197, increasing the risk of exploitation.

How to fix CVE-2025-5197?

1. PATCH: Upgrade transformers to >= 4.53.0 immediately on all environments (pip install --upgrade transformers).
2. DETECT: Audit CI/CD, training scripts, and serving code for calls to `convert_tf_weight_name_to_pt_weight_name()` or any `from_pretrained()` path that loads TensorFlow checkpoints.
3. SHORT-TERM WORKAROUND: If patching is not immediately possible, gate TF-to-PyTorch conversion behind authentication and apply input length limits or regex sanitization on weight name strings before passing them to the vulnerable function.
4. MONITOR: Alert on sustained CPU spikes in model-serving or conversion worker processes as a potential exploitation indicator.
5. INVENTORY: Identify all internal tools, notebooks, and APIs that use the transformers library and prioritize those accepting external model inputs.

What systems are affected by CVE-2025-5197?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps pipelines, model registries.

What is the CVSS score for CVE-2025-5197?

CVE-2025-5197 has a CVSS v3.1 base score of 5.3 (MEDIUM). The EPSS exploitation probability is 0.035%.

Technical Details

NVD Description

A Regular Expression Denial of Service (ReDoS) vulnerability exists in the Hugging Face Transformers library, specifically in the `convert_tf_weight_name_to_pt_weight_name()` function. This function, responsible for converting TensorFlow weight names to PyTorch format, uses a regex pattern `/[^/]*___([^/]*)/` that can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. The vulnerability affects versions up to 4.51.3 and is fixed in version 4.53.0. This issue can lead to service disruption, resource exhaustion, and potential API service vulnerabilities, impacting model conversion processes between TensorFlow and PyTorch formats.
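The quadratic blow-up can be reproduced in isolation. The sketch below applies the regex quoted above to crafted inputs; the payload shape (a leading `/` followed by underscores and no closing `/`) and the timing loop are illustrative assumptions — this does not import or exercise the actual transformers conversion code.

```python
import re
import time

# The pattern quoted in the advisory, reproduced standalone for illustration.
VULN_RE = re.compile(r"/[^/]*___([^/]*)/")

# Benign weight name: one cheap match, capturing the renamed suffix.
match = VULN_RE.search("/dense___kernel/")
print(match.group(1))  # -> kernel

# Hostile input: "/" plus underscores with no closing "/". Because "[^/]*"
# also matches "_", the engine retries every split between "[^/]*", "___",
# and "([^/]*)" before giving up: O(n^2) work overall, so doubling n
# roughly quadruples the time.
for n in (1_000, 2_000, 4_000):
    payload = "/" + "_" * n
    start = time.perf_counter()
    assert VULN_RE.search(payload) is None  # fails only after quadratic backtracking
    print(f"n={n:>5}: {time.perf_counter() - start:.3f}s")
```

At checkpoint scale (tens of thousands of characters per crafted name, many names per file), this polynomial cost is enough to pin a conversion worker's CPU, which is the availability impact the CVSS vector records.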

Exploitation Scenario

An adversary identifies a public or lightly-authenticated model-serving API that accepts TensorFlow checkpoint uploads and internally calls `convert_tf_weight_name_to_pt_weight_name()` during model loading. The attacker crafts a malicious checkpoint with a weight name such as `aaa___aaa___aaa___...` (thousands of characters designed to trigger catastrophic backtracking in the `/[^/]*___([^/]*)/` regex). The attacker submits concurrent requests with these payloads. Each request causes the conversion worker to spike to 100% CPU for an extended period. With sufficient concurrent requests, the service becomes unresponsive — disrupting model inference for legitimate users. In a pay-per-use or metered environment, this also drives up compute costs for the victim.

CVSS Vector

CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L

Timeline

Published
August 6, 2025
Last Modified
October 21, 2025
First Seen
August 6, 2025
