CVE-2025-3264: Transformers: ReDoS in dynamic module loader causes DoS

GHSA-jjph-296x-mrcr · MEDIUM · PoC available · CISA: Track*
Published July 7, 2025
CISO Take

Upgrade Hugging Face Transformers to 4.51.0 immediately if your pipelines load models from the Hub or any external source. The vulnerable `get_imports()` function processes code embedded in model files, meaning a crafted model published by a threat actor can freeze your model serving infrastructure via CPU exhaustion. Risk is highest in automated pipelines with no model provenance validation.

Risk Assessment

Low immediate exploitation probability (EPSS 0.1%, not in KEV), but the attack surface is deceptively broad. Any org with automated model loading from HuggingFace Hub or internal registries using Transformers < 4.51.0 is exposed. The supply chain angle elevates risk above the CVSS 5.3 face value: a single malicious model publication can trigger DoS across all downstream consumers. Cloud-hosted inference endpoints face the highest operational impact.

Affected Systems

Package Ecosystem Vulnerable Range Patched
transformers pip < 4.51.0 4.51.0

Severity & Risk

CVSS 3.1
5.3 / 10
EPSS
0.1%
chance of exploitation in 30 days
Higher than 26% of all CVEs
Exploitation Status
Exploit Available
Exploitation: MEDIUM
Sophistication
Moderate
Exploitation Confidence
Medium
CISA SSVC: Public PoC
Public PoC indexed (trickest/cve)
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

AV: Network
AC: Low
PR: None
UI: None
S: Unchanged
C: None
I: None
A: Low

Recommended Action

5 steps
  1. Patch

    Upgrade `transformers` to >= 4.51.0 across all environments (dev, staging, prod, CI/CD). Run `pip install --upgrade transformers` and pin the patched version in requirements files.

  2. Verify

    Run `pip freeze | grep transformers` on all model-serving hosts and training workers to audit installed versions.

  3. Model provenance

    Implement allowlisting for trusted model sources; block loading from arbitrary Hub namespaces in production.

  4. Detection

    Alert on sustained high CPU usage (>90% for >30s) in model loading phases — this is the primary signal.

  5. Workaround

    If patching is delayed: wrap `from_pretrained()` calls in a subprocess with CPU time limits using `resource.setrlimit(resource.RLIMIT_CPU, (soft, hard))` or equivalent.
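The workaround in step 5 can be sketched as follows. This is a minimal POSIX-only sketch under stated assumptions: the wrapper name, the limits, and the use of a fork-context subprocess are illustrative choices, not a transformers API.

```python
import multiprocessing
import resource


def _run_limited(fn, args, q, cpu_seconds):
    # Hard-cap CPU seconds in the child: if a ReDoS spin exceeds the
    # limit, the kernel terminates the process (SIGXCPU/SIGKILL).
    resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
    q.put(fn(*args))


def load_with_cpu_limit(fn, args=(), cpu_seconds=30, wall_timeout=120):
    """Run fn(*args) in a subprocess under a CPU-time rlimit."""
    ctx = multiprocessing.get_context("fork")  # POSIX only
    q = ctx.Queue()
    p = ctx.Process(target=_run_limited, args=(fn, args, q, cpu_seconds))
    p.start()
    p.join(wall_timeout)
    if p.is_alive():
        p.terminate()
        p.join()
        raise TimeoutError("model load exceeded wall-clock limit")
    if p.exitcode != 0:
        raise RuntimeError("model load failed or hit the CPU limit")
    return q.get()
```

Note that the child's return value must be picklable to cross the queue; since real model objects generally are not, in practice you would only probe the load in the child (e.g. return `True`) and perform the actual load in the parent once the probe succeeds.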

CISA SSVC Assessment

Decision Track*
Exploitation poc
Automatable Yes
Technical Impact partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, Robustness and Cybersecurity
ISO 42001
A.9.2 - AI System Lifecycle — Dependency and Third-Party Component Management
NIST AI RMF
MANAGE 2.2 - Risk Treatment — Mitigation of Identified AI Risks
OWASP LLM Top 10
LLM03:2025 - Supply Chain Vulnerabilities LLM10:2025 - Unbounded Consumption

Frequently Asked Questions

What is CVE-2025-3264?

CVE-2025-3264 is a Regular Expression Denial of Service (ReDoS) vulnerability in the `get_imports()` function of Hugging Face Transformers' dynamic module loader (`dynamic_module_utils.py`). A crafted model file published by a threat actor can trigger catastrophic regex backtracking during model loading, freezing model-serving infrastructure via CPU exhaustion. The issue is fixed in version 4.51.0.

Is CVE-2025-3264 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2025-3264, increasing the risk of exploitation.

How to fix CVE-2025-3264?

1. **Patch**: Upgrade `transformers` to >= 4.51.0 across all environments (dev, staging, prod, CI/CD). Run `pip install --upgrade transformers` and pin the patched version in requirements files.
2. **Verify**: Run `pip freeze | grep transformers` on all model-serving hosts and training workers to audit installed versions.
3. **Model provenance**: Implement allowlisting for trusted model sources; block loading from arbitrary Hub namespaces in production.
4. **Detection**: Alert on sustained high CPU usage (>90% for >30s) in model loading phases — this is the primary signal.
5. **Workaround** (if patching is delayed): Wrap `from_pretrained()` calls in a subprocess with CPU time limits using `resource.setrlimit(resource.RLIMIT_CPU, (soft, hard))` or equivalent.
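Step 2's audit can also be scripted. A minimal stdlib-only sketch (the helper names are illustrative; pre-release suffixes are handled crudely by taking leading digits):

```python
from importlib.metadata import PackageNotFoundError, version

PATCHED = (4, 51, 0)


def parse(v: str) -> tuple:
    # "4.49.0" -> (4, 49, 0); "4.51.0rc1" -> (4, 51, 0)
    parts = []
    for component in v.split(".")[:3]:
        digits = ""
        for ch in component:
            if ch.isdigit():
                digits += ch
            else:
                break  # stop at the first non-digit (e.g. "rc1")
        parts.append(int(digits) if digits else 0)
    return tuple(parts)


def is_vulnerable(installed: str) -> bool:
    return parse(installed) < PATCHED


def check_transformers() -> bool:
    """True if an installed transformers is older than 4.51.0."""
    try:
        return is_vulnerable(version("transformers"))
    except PackageNotFoundError:
        return False  # not installed, not exposed
```

For production tooling, prefer `packaging.version.Version` over this hand-rolled comparison.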

What systems are affected by CVE-2025-3264?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps CI/CD pipelines, agent frameworks, fine-tuning workflows.

What is the CVSS score for CVE-2025-3264?

CVE-2025-3264 has a CVSS v3.1 base score of 5.3 (MEDIUM). The EPSS exploitation probability is 0.10%.

Technical Details

NVD Description

A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_imports()` function within `dynamic_module_utils.py`. This vulnerability affects version 4.49.0 and is fixed in version 4.51.0. The issue arises from a regular expression pattern `\s*try\s*:.*?except.*?:` used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption.
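The intended behavior of that regex can be illustrated on benign input. This is a sketch under assumptions: the `MULTILINE | DOTALL` flags and the sample source string are illustrative, not copied from the library.

```python
import re

# Pattern named in the NVD description; flags are assumed.
TRY_EXCEPT = re.compile(r"\s*try\s*:.*?except.*?:", flags=re.MULTILINE | re.DOTALL)

source = """import torch
try:
    import flash_attn
except ImportError:
    pass
"""

# get_imports() strips try/except blocks before scanning for import
# statements, so optional imports are not treated as hard dependencies.
stripped = TRY_EXCEPT.sub("", source)
```

The combination of `\s*` and the lazy `.*?` quantifiers is what enables catastrophic backtracking on crafted input, so this pattern should never be run against untrusted text.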

Exploitation Scenario

An adversary creates a HuggingFace Hub account and publishes a model repository containing a Python file (e.g., `modeling_custom.py`) with a pathologically crafted try/except block designed to trigger catastrophic backtracking in the regex `\s*try\s*:.*?except.*?:`. When an organization's automated ML pipeline runs `AutoModel.from_pretrained('attacker/malicious-model')`, the `get_imports()` function processes the crafted file and the regex engine enters exponential backtracking. The model loading process hangs, consuming 100% CPU indefinitely. If the pipeline runs in a containerized serving environment, all serving replicas loading this model are DoS'd simultaneously — taking down inference endpoints without any network-level attack.
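The namespace allowlisting recommended in step 3 could be sketched like this (`TRUSTED_NAMESPACES` and the gate function are hypothetical policy code, not a transformers feature):

```python
# Hypothetical allowlist of Hub namespaces your org trusts.
TRUSTED_NAMESPACES = {"google", "meta-llama", "my-internal-org"}


def gated_model_id(model_id: str) -> str:
    """Reject Hub model IDs whose namespace is not allowlisted."""
    namespace, _, name = model_id.partition("/")
    if not name:
        # Canonical models like "bert-base-uncased" have no namespace;
        # this sketch allows them -- set your own policy here.
        return model_id
    if namespace not in TRUSTED_NAMESPACES:
        raise PermissionError(f"untrusted Hub namespace: {namespace!r}")
    return model_id


# Usage (illustrative):
#   AutoModel.from_pretrained(gated_model_id("google/flan-t5-base"))
```

Enforcing the gate at a single choke point (the function every pipeline uses to resolve model IDs) keeps `attacker/malicious-model`-style publications from ever reaching `from_pretrained()`.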

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L

Timeline

Published
July 7, 2025
Last Modified
August 7, 2025
First Seen
July 7, 2025

Related Vulnerabilities