# CVE-2025-3264: Transformers: ReDoS in dynamic module loader causes DoS
**GHSA-jjph-296x-mrcr** · Severity: MEDIUM · PoC available · CISA SSVC: Track*

Upgrade Hugging Face Transformers to 4.51.0 immediately if your pipelines load models from the Hub or any external source. The vulnerable `get_imports()` function processes code embedded in model files, meaning a crafted model published by a threat actor can freeze your model serving infrastructure via CPU exhaustion. Risk is highest in automated pipelines with no model provenance validation.
## Risk Assessment
Low immediate exploitation probability (EPSS 0.0004, not in KEV), but the attack surface is deceptively broad. Any org with automated model loading from HuggingFace Hub or internal registries using Transformers < 4.51.0 is exposed. The supply chain angle elevates risk above the CVSS 5.3 face value: a single malicious model publication can trigger DoS across all downstream consumers. Cloud-hosted inference endpoints face the highest operational impact.
## Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| transformers | pip | < 4.51.0 | 4.51.0 |
## Recommended Action

1. **Patch**: Upgrade `transformers` to >= 4.51.0 across all environments (dev, staging, prod, CI/CD). Run `pip install --upgrade transformers` and pin the patched version in requirements files.
2. **Verify**: Audit with `pip freeze | grep transformers` on all model-serving hosts and training workers.
3. **Model provenance**: Implement allowlisting for trusted model sources; block loading from arbitrary Hub namespaces in production.
4. **Detection**: Alert on sustained high CPU usage (>90% for >30s) during model-loading phases; this is the primary signal.
5. **Workaround** (if patching is delayed): Wrap `from_pretrained()` calls in a subprocess with a CPU time limit using `resource.setrlimit(resource.RLIMIT_CPU, ...)` or equivalent.
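Step 5's subprocess workaround can be sketched as below. This is a minimal illustration, assuming a Unix host (`resource.RLIMIT_CPU` is Unix-only) and a picklable callable; the model id in the usage comment is hypothetical.

```python
import multiprocessing
import queue
import resource


def _run_limited(fn, args, cpu_seconds, result_queue):
    # Child process: hard-cap CPU time (Unix only). If the ReDoS fires, the
    # kernel delivers SIGXCPU and kills this child instead of hanging the
    # whole service.
    resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
    result_queue.put(fn(*args))


def run_with_cpu_limit(fn, args=(), cpu_seconds=60, grace=30):
    """Run fn(*args) in a subprocess under a hard CPU-time cap."""
    result_queue = multiprocessing.Queue()
    proc = multiprocessing.Process(
        target=_run_limited, args=(fn, args, cpu_seconds, result_queue)
    )
    proc.start()
    proc.join(cpu_seconds + grace)  # wall-clock safety net on top of the CPU cap
    if proc.is_alive():
        proc.kill()
        proc.join()
    if proc.exitcode != 0:
        raise TimeoutError(f"{fn!r} exceeded the CPU limit or crashed")
    try:
        return result_queue.get(timeout=5)
    except queue.Empty:
        raise TimeoutError(f"{fn!r} returned no result")


# Sketch of the advisory's workaround (model id is hypothetical):
#   from transformers import AutoModel
#   model = run_with_cpu_limit(AutoModel.from_pretrained, ("org/model",),
#                              cpu_seconds=120)
```

In practice the child works best as a pre-flight canary: if the limited load completes, the parent loads the now-cached model normally instead of shipping it back through the queue.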
## CISA SSVC Assessment

Decision: Track* (per the CISA Coordinator decision tree). Source: CISA Vulnrichment (SSVC v2.0).
## Frequently Asked Questions
### What is CVE-2025-3264?
CVE-2025-3264 is a Regular Expression Denial of Service (ReDoS) vulnerability in the Hugging Face Transformers library's `get_imports()` function in `dynamic_module_utils.py`. A model repository containing a crafted Python file can trigger catastrophic regex backtracking when the model is loaded, exhausting CPU and freezing model-serving infrastructure. The issue is fixed in version 4.51.0.
### Is CVE-2025-3264 actively exploited?
No active exploitation has been reported, and the CVE is not in CISA's Known Exploited Vulnerabilities (KEV) catalog. However, proof-of-concept exploit code is publicly available, which increases the risk of exploitation.
### How to fix CVE-2025-3264?
1. **Patch**: Upgrade `transformers` to >= 4.51.0 across all environments (dev, staging, prod, CI/CD). Run `pip install --upgrade transformers` and pin the patched version in requirements files.
2. **Verify**: Audit with `pip freeze | grep transformers` on all model-serving hosts and training workers.
3. **Model provenance**: Implement allowlisting for trusted model sources; block loading from arbitrary Hub namespaces in production.
4. **Detection**: Alert on sustained high CPU usage (>90% for >30s) during model-loading phases; this is the primary signal.
5. **Workaround** (if patching is delayed): Wrap `from_pretrained()` calls in a subprocess with a CPU time limit using `resource.setrlimit(resource.RLIMIT_CPU, ...)` or equivalent.
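To complement the verify step, a small helper can flag vulnerable installs programmatically. A standard-library sketch with a deliberately minimal version parser (use `packaging.version` for full PEP 440 handling); only the 4.51.0 threshold comes from this advisory.

```python
from importlib.metadata import PackageNotFoundError, version

PATCHED = (4, 51, 0)  # first fixed release per this advisory


def parse_release(v):
    """Parse a release string like '4.50.3' into an int tuple.

    Minimal by design: pre/post/dev suffixes are truncated, which is enough
    for a patched/vulnerable decision against 4.51.0.
    """
    parts = []
    for token in v.split("."):
        digits = ""
        for ch in token:
            if not ch.isdigit():
                break
            digits += ch
        parts.append(int(digits) if digits else 0)
    return tuple(parts)


def is_vulnerable(installed_version):
    # True when the installed release predates the 4.51.0 fix.
    return parse_release(installed_version) < PATCHED


def check_environment(package="transformers"):
    try:
        installed = version(package)
    except PackageNotFoundError:
        return f"{package}: not installed"
    state = "VULNERABLE (< 4.51.0)" if is_vulnerable(installed) else "patched"
    return f"{package} {installed}: {state}"
```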
### What systems are affected by CVE-2025-3264?
This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps CI/CD pipelines, agent frameworks, fine-tuning workflows.
### What is the CVSS score for CVE-2025-3264?
CVE-2025-3264 has a CVSS base score of 5.3 (MEDIUM). The EPSS exploitation probability is 0.10%.
## Technical Details
### NVD Description
A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_imports()` function within `dynamic_module_utils.py`. This vulnerability affects versions 4.49.0 and is fixed in version 4.51.0. The issue arises from a regular expression pattern `\s*try\s*:.*?except.*?:` used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption.
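The filtering step the description refers to can be illustrated with a safe, simplified reconstruction (this helper is illustrative, not the upstream `get_imports()` implementation; the adversarial input that triggers backtracking is deliberately not constructed here).

```python
import re

# The try/except-stripping pattern cited in the advisory (simplified context;
# not the exact upstream get_imports() code).
TRY_EXCEPT = re.compile(r"\s*try\s*:.*?except.*?:", flags=re.DOTALL)

source = """\
import torch
try:
    import flash_attn
except ImportError:
    pass
import os
"""

# On well-formed code the pattern strips optional-dependency guards, so the
# guarded import is not reported as a hard requirement.
cleaned = TRY_EXCEPT.sub("", source)
imports = re.findall(r"^\s*(?:from|import)\s+(\w+)", cleaned, flags=re.MULTILINE)
print(imports)  # → ['torch', 'os']

# A crafted file interleaving many partial "try"/"except" fragments forces the
# lazy quantifiers into catastrophic backtracking; such an input is
# deliberately NOT built or executed here.
```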
### Exploitation Scenario
An adversary creates a HuggingFace Hub account and publishes a model repository containing a Python file (e.g., `modeling_custom.py`) with a pathologically crafted try/except block designed to trigger catastrophic backtracking in the regex `\s*try\s*:.*?except.*?:`. When an organization's automated ML pipeline runs `AutoModel.from_pretrained('attacker/malicious-model')`, the `get_imports()` function processes the crafted file and the regex engine enters exponential backtracking. The model loading process hangs, consuming 100% CPU indefinitely. If the pipeline runs in a containerized serving environment, all serving replicas loading this model are DoS'd simultaneously — taking down inference endpoints without any network-level attack.
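The provenance allowlisting recommended above reduces this scenario to a gate in front of `from_pretrained()`. A minimal sketch; the trusted namespaces below are hypothetical examples, not an endorsement of specific publishers.

```python
# Namespace allowlisting for Hub model ids (hypothetical example namespaces).
TRUSTED_NAMESPACES = {"google", "meta-llama", "my-internal-org"}


def assert_trusted(model_id):
    """Permit only '<namespace>/<name>' ids whose namespace is allowlisted."""
    namespace, sep, name = model_id.partition("/")
    if not sep or not name:
        raise ValueError(f"unqualified model id rejected: {model_id!r}")
    if namespace not in TRUSTED_NAMESPACES:
        raise ValueError(f"untrusted namespace rejected: {namespace!r}")
    return model_id


# usage sketch (model loading itself is unchanged):
#   model = AutoModel.from_pretrained(assert_trusted("attacker/malicious-model"))
#   # raises ValueError before any remote code is fetched
```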
### Weaknesses (CWE)
CWE-1333: Inefficient Regular Expression Complexity (ReDoS)
### CVSS Vector
`CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L`

### References
- github.com/advisories/GHSA-jjph-296x-mrcr
- github.com/huggingface/transformers/commit/126abe3461762e5fc180e7e614391d1b4ab051ca
- nvd.nist.gov/vuln/detail/CVE-2025-3264
- github.com/huggingface/transformers/commit/0720e206c6ba28887e4d60ef60a6a089f6c1cc76 (patch)
- huntr.com/bounties/3c6f7822-9992-476d-8cf0-b0b1623427df (third-party exploit report)
## Related Vulnerabilities

All in the same package, `transformers`:

- CVE-2024-3568 (9.6): HuggingFace Transformers: RCE via pickle deserialization
- CVE-2024-11393 (8.8): Transformers: RCE via MaskFormer model deserialization
- CVE-2023-6730 (8.8): HuggingFace Transformers: RCE via unsafe deserialization
- CVE-2024-11392 (8.8): HuggingFace Transformers: RCE via config deserialization
- CVE-2024-11394 (8.8): Transformers: RCE via Trax model deserialization