CVE-2025-2999: PyTorch: memory corruption in RNN sequence unpacking
MEDIUM · PoC AVAILABLE

A local attacker with low privileges on shared ML compute can trigger memory corruption in PyTorch 2.6.0's RNN sequence handling, risking training data or credential exposure from process memory. Multi-tenant GPU clusters, shared Jupyter environments, and CI/CD pipelines running RNN-based workloads carry the highest exposure. Upgrade away from PyTorch 2.6.0 once a patch lands (track issue #149622) and enforce workload isolation in the interim.
Risk Assessment
Moderate organizational risk despite a CVSS 5.3 score. The local attack vector (AV:L) substantially limits exposure in isolated single-user environments, but multi-tenant ML infrastructure—shared Jupyter hubs, HPC clusters, SageMaker Studio shared kernels—eliminates this barrier entirely. In the hands of skilled attackers, CWE-119 memory corruption bugs occasionally yield exploitation primitives beyond what the initial CVSS score captures. No active exploitation or KEV listing reduces urgency, but PyTorch's prevalence across virtually every AI/ML stack makes patch lag a meaningful supply chain risk.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| pytorch | pip | — | No patch |
If you use pytorch 2.6.0, you are affected; no patched release is available yet.
Recommended Action
1. Pin PyTorch below 2.6.0 or freeze at an unaffected version until an official patch releases; monitor pytorch/pytorch issue #149622 for status.
2. Isolate training workloads per user using separate containers or VMs in any multi-tenant compute environment.
3. Apply least-privilege: revoke unnecessary local shell access to ML training nodes.
4. Audit codebases for `torch.nn.utils.rnn.unpack_sequence` and `pack_padded_sequence` call sites.
5. Monitor PyTorch training processes for unexpected crashes or segmentation faults as indicators of exploitation attempts.
6. Add PyTorch version pinning and vulnerability scanning to your ML dependency pipeline (pip-audit, Safety, Dependabot).
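Steps 1 and 6 can be enforced programmatically. The sketch below is a minimal, hedged example: the `AFFECTED` set and both function names are my own, and the gate simply compares the installed version string against the release flagged by this advisory.

```python
# Hedged sketch: fail fast if a training environment is running the affected
# PyTorch release. The version check is plain Python; the torch import is
# optional, so the gate can also run in CI before torch is installed.

AFFECTED = {"2.6.0"}  # per this advisory; widen if follow-up releases are flagged


def is_affected(version: str) -> bool:
    """Return True if `version` matches a release flagged by CVE-2025-2999.
    Local build suffixes such as '2.6.0+cu124' are stripped before comparison."""
    base = version.split("+", 1)[0]
    return base in AFFECTED


def assert_safe_torch() -> None:
    """Raise if the installed torch build is an affected release."""
    try:
        import torch  # type: ignore
    except ImportError:
        return  # torch not installed; nothing to gate
    if is_affected(torch.__version__):
        raise RuntimeError(
            f"PyTorch {torch.__version__} is affected by CVE-2025-2999; "
            "pin below 2.6.0 or upgrade once a patch lands (issue #149622)."
        )
```

Calling `assert_safe_torch()` at the top of a training entrypoint turns a silent patch-lag exposure into an immediate, explainable failure.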
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-2999?
A local attacker with low privileges on shared ML compute can trigger memory corruption in PyTorch 2.6.0's RNN sequence handling, risking training data or credential exposure from process memory. Multi-tenant GPU clusters, shared Jupyter environments, and CI/CD pipelines running RNN-based workloads carry the highest exposure. Upgrade away from PyTorch 2.6.0 once a patch lands (track issue #149622) and enforce workload isolation in the interim.
Is CVE-2025-2999 actively exploited?
No active exploitation or KEV listing has been reported, but proof-of-concept exploit code is publicly available for CVE-2025-2999, which increases the risk of exploitation.
How to fix CVE-2025-2999?
1. Pin PyTorch below 2.6.0 or freeze at an unaffected version until an official patch releases; monitor pytorch/pytorch issue #149622 for status. 2. Isolate training workloads per user using separate containers or VMs in any multi-tenant compute environment. 3. Apply least-privilege: revoke unnecessary local shell access to ML training nodes. 4. Audit codebases for `torch.nn.utils.rnn.unpack_sequence` and `pack_padded_sequence` call sites. 5. Monitor PyTorch training processes for unexpected crashes or segmentation faults as indicators of exploitation attempts. 6. Add PyTorch version pinning and vulnerability scanning to your ML dependency pipeline (pip-audit, Safety, Dependabot).
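Step 4 (auditing for call sites) can be automated with a minimal scanner. This is a hedged sketch: the regex, function name, and report shape are my own, and a line-level regex scan is approximate (it will also match comments), so treat hits as a starting point for manual review rather than a verdict.

```python
import re
from pathlib import Path

# Flag source lines that call the RNN utilities named in this advisory.
CALL_SITES = re.compile(r"\b(unpack_sequence|pack_padded_sequence)\s*\(")


def audit_tree(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line_number, line) for every suspected call site under root."""
    hits = []
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            if CALL_SITES.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

Running `audit_tree(".")` from a repository root lists every file and line that needs review before deciding whether a workload is exposed.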
What systems are affected by CVE-2025-2999?
This vulnerability affects the following AI/ML architecture patterns: training pipelines, sequence-to-sequence models, multi-tenant ML platforms, NLP inference pipelines, time-series forecasting pipelines.
What is the CVSS score for CVE-2025-2999?
CVE-2025-2999 has a CVSS v3.1 base score of 5.3 (MEDIUM). The EPSS exploitation probability is 0.09%.
Technical Details
NVD Description
A vulnerability was found in PyTorch 2.6.0. It has been rated as critical. Affected by this issue is the function torch.nn.utils.rnn.unpack_sequence. The manipulation leads to memory corruption. Attacking locally is a requirement. The exploit has been disclosed to the public and may be used.
Exploitation Scenario
A rogue insider or attacker who has compromised an ML engineer account on a shared HPC training cluster crafts a malformed packed sequence tensor and feeds it to a training script that calls `torch.nn.utils.rnn.unpack_sequence`. The memory corruption corrupts heap adjacent to active tensor allocations, potentially leaking portions of other tenants' training data batches or authentication tokens loaded into the same process. In a shared Jupyter environment, this could expose proprietary training datasets across user boundaries. A more advanced attacker attempts to convert the write primitive into arbitrary code execution within the training node, pivoting to exfiltrate model weights or inject poisoned gradients.
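The malformed packed sequence described above violates structural invariants that a well-formed `PackedSequence` upholds: `batch_sizes` entries are positive, non-increasing, and sum to the number of packed rows. The helper below is a hedged defense-in-depth sketch (the function name is my own, and it operates on plain Python values so it runs without torch); validating these invariants before calling `unpack_sequence` is a mitigation, not a substitute for upgrading.

```python
def check_packed_invariants(num_packed_rows: int, batch_sizes: list[int]) -> None:
    """Validate the structural invariants a well-formed packed sequence upholds
    before handing it to unpack_sequence. Raises ValueError on violation."""
    if not batch_sizes:
        raise ValueError("batch_sizes is empty")
    if any(b <= 0 for b in batch_sizes):
        raise ValueError("batch_sizes must be strictly positive")
    # batch_sizes[t] counts sequences still active at timestep t, so it can
    # only shrink as t grows.
    if any(a < b for a, b in zip(batch_sizes, batch_sizes[1:])):
        raise ValueError("batch_sizes must be non-increasing")
    if sum(batch_sizes) != num_packed_rows:
        raise ValueError(
            f"packed data has {num_packed_rows} rows but batch_sizes sums "
            f"to {sum(batch_sizes)}"
        )
```

For a real `PackedSequence` `ps`, the corresponding call would be `check_packed_invariants(ps.data.shape[0], ps.batch_sizes.tolist())` before any unpacking of untrusted tensors.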
Weaknesses (CWE)
CWE-119: Improper Restriction of Operations within the Bounds of a Memory Buffer
CVSS Vector
`CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:L`
References
- github.com/pytorch/pytorch/issues/149622 (Issue, Vendor)
- vuldb.com (Permissions Required, VDB)
- vuldb.com (3rd Party, VDB)
- github.com/fkie-cad/nvd-json-data-feeds (Exploit)
Related Vulnerabilities
- CVE-2024-5452 · 9.8 · pytorch-lightning: RCE via deepdiff Delta deserialization (same package: torch)
- CVE-2023-43654 · 9.8 · TorchServe: SSRF + RCE via unrestricted model URL loading (same package: torch)
- CVE-2022-45907 · 9.8 · PyTorch: RCE via unsafe eval in JIT annotations (same package: torch)
- CVE-2022-0845 · 9.8 · pytorch-lightning: code injection enables full RCE (same package: torch)
- CVE-2024-35198 · 9.8 · TorchServe: URL bypass enables arbitrary model loading (same package: torch)