CVE-2025-55557: PyTorch: DoS via cummin+Inductor NameError in 2.7.0
HIGH | PoC AVAILABLE | CISA: TRACK

PyTorch 2.7.0 crashes with an uncaught NameError when any model using torch.cummin is compiled through the Inductor backend; no authentication or privileges are required to trigger it. Any production ML serving endpoint accepting compiled models, or any training pipeline using torch.compile on affected architectures, is exposed to remote availability disruption. Immediate action: pin to a patched build (PR #151931) or disable Inductor compilation for models containing torch.cummin operations.
Risk Assessment
High exploitability: CVSS 7.5, network-accessible, zero authentication, low complexity—attacker only needs to influence a model definition or submit a crafted model to a serving endpoint. Impact is purely availability (A:H), no confidentiality or integrity risk. Blast radius is bounded to PyTorch 2.7.0 with Inductor-compiled models, but that version was just released and many teams will be on it. Shared ML platforms, MLaaS APIs, and CI/CD pipelines running automated torch.compile are the highest-risk environments.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| pytorch | pip | = 2.7.0 | No patch yet (fix pending in PR #151931) |
Do you run PyTorch 2.7.0 with Inductor compilation? You're affected.
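As a quick audit aid, the installed torch version can be checked programmatically. This is a minimal sketch; the helper names below are illustrative, not part of PyTorch, and the check assumes standard pip/conda metadata is present.

```python
from importlib.metadata import version, PackageNotFoundError

def is_vulnerable_version(ver: str) -> bool:
    """True only for the affected 2.7.0 release; strips local build
    tags such as '+cu126' before comparing."""
    return ver.split("+")[0] == "2.7.0"

def torch_is_vulnerable(pkg: str = "torch") -> bool:
    """Check the installed PyTorch build (illustrative helper)."""
    try:
        return is_vulnerable_version(version(pkg))
    except PackageNotFoundError:
        return False  # torch not installed in this environment
```

Run this across all serving and CI environments; custom or vendored builds may need the package name adjusted.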
Recommended Action
Five steps:

1. PATCH: Apply PR #151931 once merged into a stable PyTorch release; monitor https://github.com/pytorch/pytorch/pull/151931 for merge status.
2. WORKAROUND: Avoid torch.compile() (Inductor backend) on models that include torch.cummin; use eager mode as a fallback.
3. DETECTION: Monitor serving infrastructure for unexpected Python NameError crashes in PyTorch processes; alert on abnormal process termination in ML serving pods.
4. ISOLATION: If you run a shared ML platform that accepts user-submitted models, sandbox torch.compile execution in isolated processes with resource limits and restart policies.
5. VERSION CONTROL: Audit all environments for PyTorch 2.7.0 and prioritize patching of externally accessible inference endpoints.
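The eager-mode workaround can be automated with a thin guard around torch.compile(). The sketch below uses a bytecode-level heuristic (the co_names of the model's forward) to spot cummin usage; uses_cummin and maybe_compile are hypothetical helpers, not PyTorch APIs, and the heuristic misses dynamically dispatched or indirect calls.

```python
def uses_cummin(forward_fn) -> bool:
    """Heuristic: inspect the names referenced by forward()'s bytecode.
    Both torch.cummin(x) and x.cummin(...) surface 'cummin' in co_names."""
    code = getattr(forward_fn, "__code__", None)
    if code is None:
        return True  # can't inspect: be conservative and skip compilation
    return "cummin" in code.co_names

def maybe_compile(model, compile_fn):
    """Compile only when the model avoids torch.cummin; otherwise return
    the model unchanged so it runs in eager mode, sidestepping the
    Inductor NameError crash in 2.7.0. compile_fn would be torch.compile."""
    if uses_cummin(type(model).forward):
        return model
    return compile_fn(model)
```

In a serving stack, calling maybe_compile(model, torch.compile) instead of torch.compile(model) keeps cummin models on the eager path until a patched release lands.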
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Classification: Track
Frequently Asked Questions
What is CVE-2025-55557?
PyTorch 2.7.0 crashes with an uncaught NameError when any model using torch.cummin is compiled through the Inductor backend—no authentication or privileges required to trigger it. Any production ML serving endpoint accepting compiled models, or any training pipeline using torch.compile on affected architectures, is exposed to remote availability disruption. Immediate action: pin to a patched build (PR #151931) or disable Inductor compilation for models containing torch.cummin operations.
Is CVE-2025-55557 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2025-55557, increasing the risk of exploitation.
How to fix CVE-2025-55557?
1. PATCH: Apply PR #151931 once merged into a stable PyTorch release; monitor https://github.com/pytorch/pytorch/pull/151931 for merge status.
2. WORKAROUND: Avoid torch.compile() (Inductor backend) on models that include torch.cummin; use eager mode as a fallback.
3. DETECTION: Monitor serving infrastructure for unexpected Python NameError crashes in PyTorch processes; alert on abnormal process termination in ML serving pods.
4. ISOLATION: If you run a shared ML platform that accepts user-submitted models, sandbox torch.compile execution in isolated processes with resource limits and restart policies.
5. VERSION CONTROL: Audit all environments for PyTorch 2.7.0 and prioritize patching of externally accessible inference endpoints.
What systems are affected by CVE-2025-55557?
This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, ml compilation pipelines, shared ML platforms.
What is the CVSS score for CVE-2025-55557?
CVE-2025-55557 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.05%.
Technical Details
NVD Description
A Name Error occurs in pytorch v2.7.0 when a PyTorch model consists of torch.cummin and is compiled by Inductor, leading to a Denial of Service (DoS).
Exploitation Scenario
An adversary targeting a public ML inference API (e.g., a model evaluation service or shared training platform) submits a PyTorch model that includes a torch.cummin operation. When the backend attempts to optimize the model using torch.compile() with Inductor, a NameError is raised in the generated code, crashing the worker process. In a Kubernetes-based serving environment without proper process isolation, repeated submissions could continuously crash and restart inference workers, causing sustained service degradation. In a CI/CD pipeline accepting external PRs, a contributor could embed torch.cummin in a model test to crash the pipeline runner.
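The process-isolation mitigation sketched above can be prototyped by running compilation in a disposable child process, so a crash is contained rather than taking down the serving worker. The snippet simulates the Inductor NameError with a plain raise; compile_in_sandbox is a hypothetical wrapper, and a real deployment would add resource limits (e.g. cgroups or ulimits) and a restart policy on top.

```python
import subprocess
import sys

# Simulates the generated-code crash: Inductor's output for a cummin
# model in PyTorch 2.7.0 raises an uncaught NameError at run time.
CRASHING_COMPILE = "raise NameError('cummin kernel symbol missing')"
CLEAN_COMPILE = "pass"

def compile_in_sandbox(compile_snippet: str, timeout: int = 30) -> bool:
    """Run (simulated) compilation in a child process. Returns True on
    success; a crashing compile only kills the child, never the server."""
    proc = subprocess.run(
        [sys.executable, "-c", compile_snippet],
        capture_output=True,
        timeout=timeout,
    )
    return proc.returncode == 0
```

On a failed compile the parent can log the crash, fall back to eager execution, and keep serving, which turns the DoS into a per-request degradation instead of a worker outage.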
Weaknesses (CWE)
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
References
Timeline
Related Vulnerabilities
- CVE-2024-5452 (9.8) pytorch-lightning: RCE via deepdiff Delta deserialization (same package: torch)
- CVE-2023-43654 (9.8) TorchServe: SSRF + RCE via unrestricted model URL loading (same package: torch)
- CVE-2022-45907 (9.8) PyTorch: RCE via unsafe eval in JIT annotations (same package: torch)
- CVE-2022-0845 (9.8) pytorch-lightning: code injection enables full RCE (same package: torch)
- CVE-2024-35198 (9.8) TorchServe: URL bypass enables arbitrary model loading (same package: torch)