CVE-2025-1194: transformers: ReDoS in GPT-NeoX Japanese tokenizer

GHSA-fpwr-67px-3qhx · MEDIUM · PoC AVAILABLE · CISA: TRACK*
Published April 29, 2025
CISO Take

Upgrade HuggingFace Transformers to 4.50.0 immediately if your stack includes any Japanese NLP workloads. The ReDoS in SubWordJapaneseTokenizer can peg CPU to 100% via a single crafted input, taking down inference services or preprocessing pipelines. If you are not running Japanese language models, your exposure is zero — this is a narrow but real availability risk for those who are.

Risk Assessment

Actual risk is low-to-moderate despite the CVSS 6.5 rating. An EPSS score of 0.08% indicates a very low predicted likelihood of exploitation. The attack vector is network-based but requires user interaction: a downstream user or API consumer must submit the malicious payload to a tokenizer-exposed endpoint. Impact is purely availability (DoS), with no data loss or confidentiality breach. The blast radius is limited to organizations running GPT-NeoX-Japanese models. The fix is available and straightforward (a pip upgrade), making residual risk low for patched systems.

Affected Systems

Package Ecosystem Vulnerable Range Patched
transformers pip < 4.50.0 4.50.0

Severity & Risk

CVSS 3.1
6.5 / 10
EPSS
0.08%
chance of exploitation in 30 days
Higher than 23% of all CVEs
Exploitation Status
PoC Available
Exploitation: MEDIUM
Sophistication
Trivial
Exploitation Confidence
Medium
CISA SSVC: Public PoC
Public PoC indexed (trickest/cve)
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

AV (Attack Vector): Network
AC (Attack Complexity): Low
PR (Privileges Required): None
UI (User Interaction): Required
S (Scope): Unchanged
C (Confidentiality): None
I (Integrity): None
A (Availability): High

Recommended Action

5 steps
  1. PATCH

    Upgrade transformers to ≥4.50.0 (`pip install --upgrade transformers`). This is the only complete fix.

  2. WORKAROUND

    If the upgrade is blocked, implement input length caps and character-class validation before tokenization; reject inputs exceeding a safe threshold for Japanese text (see the guard sketch after this list).

  3. DETECTION

    Monitor inference-server CPU utilization for sustained spikes correlated with single requests; alert on requests exceeding 2-3x normal tokenization latency (see the timing wrapper after this list).

  4. CONTAINMENT

    If running multi-tenant inference, isolate Japanese-tokenizer workloads on dedicated workers with CPU throttling (cgroups/ulimit) to prevent cross-tenant DoS (see the CPU-limit sketch after this list).

  5. VALIDATION

    After patching, confirm the installed version with `pip show transformers | grep Version` (see the CI check after this list).
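A minimal sketch of the step-2 guard, assuming you can intercept requests before they reach the tokenizer; the length cap and the allowed character ranges are illustrative values, not project defaults:

```python
import re

MAX_CHARS = 2000  # illustrative cap; tune to your real Japanese workloads

# A single character class matches in linear time, so this guard itself
# carries no backtracking risk. Ranges cover hiragana, katakana, common
# CJK ideographs, full-width forms, printable ASCII, and whitespace;
# extend as your data requires.
ALLOWED = re.compile(
    r"^[\u3040-\u30ff\u3400-\u4dbf\u4e00-\u9fff"
    r"\uff00-\uffef\u0020-\u007e\s]*$"
)

def is_safe_input(text: str) -> bool:
    """Reject oversized or out-of-charset inputs before tokenization."""
    return len(text) <= MAX_CHARS and ALLOWED.match(text) is not None
```

Reject failing inputs outright (e.g. HTTP 400) rather than tokenizing and hoping for a timeout, and apply the same guard in batch preprocessing, not just at the API edge.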
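A sketch of the step-3 latency alert, assuming a callable `tokenizer` and a logging pipeline your alerting already scrapes; the 0.5 s budget is a placeholder for 2-3x your measured baseline:

```python
import logging
import time

logger = logging.getLogger("tokenizer-watch")
LATENCY_BUDGET_S = 0.5  # placeholder: set to ~2-3x your normal p99 latency

def timed_tokenize(tokenizer, text: str):
    """Tokenize and emit a warning when latency exceeds the budget."""
    start = time.monotonic()
    tokens = tokenizer(text)
    elapsed = time.monotonic() - start
    if elapsed > LATENCY_BUDGET_S:
        logger.warning("slow tokenization: %.2fs for %d chars",
                       elapsed, len(text))
    return tokens
```

Note the limitation: a fully wedged worker never returns, so this catches slow-but-completing inputs; pair it with the CPU limits below for the pathological case.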
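A per-worker CPU ceiling in the spirit of step 4's ulimit suggestion, sketched with Python's POSIX-only `resource` module; the 10/15-second limits are illustrative and apply to the whole worker process:

```python
import resource
import signal
import sys

def on_cpu_exceeded(signum, frame):
    # SIGXCPU arrives at the soft limit; exit so the supervisor
    # (systemd, Kubernetes, gunicorn) restarts a clean worker.
    sys.exit("CPU time limit exceeded; worker exiting")

signal.signal(signal.SIGXCPU, on_cpu_exceeded)
# (soft, hard) CPU seconds; the kernel kills the process at the hard limit.
resource.setrlimit(resource.RLIMIT_CPU, (10, 15))
```

This bounds how long a single poisoned request can burn CPU; a crashed worker restarts instead of starving co-tenant workloads.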
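For CI, the shell check in step 5 can be made an executable assertion. This sketch assumes the `packaging` library, which ships as a transformers dependency:

```python
from importlib.metadata import version
from packaging.version import Version

installed = Version(version("transformers"))
# Fail the build if a vulnerable release is still installed.
assert installed >= Version("4.50.0"), f"vulnerable transformers {installed}"
print(f"transformers {installed} is patched for CVE-2025-1194")
```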

CISA SSVC Assessment

Decision: Track*
Exploitation: PoC
Automatable: No
Technical Impact: Partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 9 - Risk management system
ISO 42001
6.1.2 - AI risk assessment
8.4 - AI system operation
NIST AI RMF
GOVERN-1.7 - Processes for AI risk and impact tracking
MANAGE-2.2 - Mechanisms to sustain value of deployed AI systems
OWASP LLM Top 10
LLM04 - Model Denial of Service

Frequently Asked Questions

What is CVE-2025-1194?

CVE-2025-1194 is a Regular Expression Denial of Service (ReDoS) vulnerability in the Hugging Face Transformers library, specifically in the SubWordJapaneseTokenizer class used by the GPT-NeoX-Japanese model (`tokenization_gpt_neox_japanese.py`). A specially crafted input triggers exponential regex backtracking, driving CPU usage to 100% and causing denial of service for inference services or preprocessing pipelines. The issue is fixed in transformers 4.50.0.

Is CVE-2025-1194 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2025-1194, increasing the risk of exploitation.

How to fix CVE-2025-1194?

1. PATCH: Upgrade transformers to ≥4.50.0 (pip install --upgrade transformers). This is the only complete fix. 2. WORKAROUND (if upgrade is blocked): Implement input length caps and character class validation before tokenization; reject inputs exceeding a safe threshold for Japanese text. 3. DETECTION: Monitor inference server CPU utilization for sustained spikes correlated with single requests; alert on requests exceeding 2-3x normal tokenization latency. 4. CONTAINMENT: If running multi-tenant inference, isolate Japanese tokenizer workloads to dedicated workers with CPU throttling (cgroups/ulimit) to prevent cross-tenant DoS. 5. VALIDATION: After patching, confirm version with `pip show transformers | grep Version`.

What systems are affected by CVE-2025-1194?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, NLP processing pipelines, batch inference.

What is the CVSS score for CVE-2025-1194?

CVE-2025-1194 has a CVSS v3.1 base score of 6.5 (MEDIUM). The EPSS exploitation probability is 0.08%.

Technical Details

NVD Description

A Regular Expression Denial of Service (ReDoS) vulnerability was identified in the huggingface/transformers library, specifically in the file `tokenization_gpt_neox_japanese.py` of the GPT-NeoX-Japanese model. The vulnerability occurs in the SubWordJapaneseTokenizer class, where regular expressions process specially crafted inputs. The issue stems from a regex exhibiting exponential complexity under certain conditions, leading to excessive backtracking. This can result in high CPU usage and potential application downtime, effectively creating a Denial of Service (DoS) scenario. The affected version is v4.48.1 (the latest release at the time of the report).
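To see why exponential backtracking matters, the toy pattern below reproduces the failure mode; it is a generic illustration, not the actual regex from `tokenization_gpt_neox_japanese.py`:

```python
import re
import time

# Nested quantifiers over overlapping matches: classic catastrophic
# backtracking. NOT the pattern from tokenization_gpt_neox_japanese.py.
evil = re.compile(r"^(a+)+$")

payload = "a" * 25 + "!"  # almost-matching input forces full backtracking
start = time.monotonic()
evil.match(payload)  # runtime roughly doubles with each extra 'a'
print(f"{time.monotonic() - start:.2f}s for {len(payload)} chars")
```

Because the work roughly doubles with each added character, a single modest-sized request is enough to pin a CPU core, which is exactly the availability impact described above.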

Exploitation Scenario

An adversary targeting a Japanese-language sentiment analysis or document processing SaaS API sends a POST request with a specially crafted string designed to trigger catastrophic backtracking in the SubWordJapaneseTokenizer regex engine. No authentication is required if the endpoint is public-facing. The regex processes the input, enters exponential backtracking, and the worker process consumes 100% CPU for an extended period. In a Kubernetes deployment, liveness probes time out and the pod restarts, creating a crash-restart cycle that an attacker can sustain with low-rate request flooding. The PoC is public via huntr.com, making this accessible to low-sophistication actors targeting Japanese NLP services.

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H

Timeline

Published
April 29, 2025
Last Modified
August 4, 2025
First Seen
April 29, 2025

Related Vulnerabilities