CVE-2025-2998: PyTorch: memory corruption in RNN pad_packed_sequence
Severity: Medium | Exploit: PoC available | CISA SSVC: Track*

A memory corruption bug in PyTorch 2.6.0's pad_packed_sequence function is exploitable by any local user with low privileges — relevant in shared ML compute environments (JupyterHub, Kubeflow, shared training clusters). No patch version is identified yet; restrict local access to ML training infrastructure and monitor for anomalous process behavior on GPU/training nodes. Exploit code is public, raising urgency despite the local-only attack vector.
Risk Assessment
CVSS 5.3 (Medium) understates operational risk in AI/ML environments where shared compute is the norm. Multi-tenant training clusters, shared Jupyter environments, and CI/CD ML pipelines routinely grant multiple users local execution access — precisely the attack surface this vulnerability targets. Memory corruption (CWE-119) with low attack complexity and no user interaction means a low-privileged insider or a compromised ML engineer account can reliably exploit this. The public disclosure of exploit code elevates priority. No CISA KEV listing and no confirmed active exploitation keep this below critical, but teams running shared ML infrastructure should treat mitigation as high priority (no patch is available yet).
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| pytorch | pip | — | No patch |
If you run PyTorch 2.6.0, you are affected; no patched release is available yet.
Recommended Action
1. Pin PyTorch to a version prior to 2.6.0 or await an official patch — monitor pytorch/pytorch#149622 for fix status.
2. Restrict local access to training infrastructure; enforce least-privilege on shared ML compute nodes.
3. Isolate training workloads in containers with no shared memory namespaces between users.
4. Audit JupyterHub and Kubeflow deployments for multi-tenant exposure — ensure namespace isolation.
5. Detection: monitor for anomalous memory usage patterns or segfaults in PyTorch worker processes, which may indicate exploitation attempts.
6. If PyTorch 2.6.0 is required, disable use of pad_packed_sequence and replace with equivalent manual padding logic as a temporary workaround.
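Step 6's manual-padding workaround can be sketched in plain Python. The function name and exact padding behavior below are illustrative (not from the advisory); in real code the padded lists would be wrapped with `torch.tensor` before being fed to an RNN:

```python
def pad_batch(sequences, padding_value=0.0):
    """Pad variable-length sequences to the batch's max length.

    Illustrative stand-in for the pack_padded_sequence /
    pad_packed_sequence round trip: it produces the same padded
    layout without calling the vulnerable function.
    """
    max_len = max(len(s) for s in sequences)
    padded = [list(s) + [padding_value] * (max_len - len(s)) for s in sequences]
    lengths = [len(s) for s in sequences]
    return padded, lengths

batch, lengths = pad_batch([[1.0, 2.0, 3.0], [4.0, 5.0]])
# batch   -> [[1.0, 2.0, 3.0], [4.0, 5.0, 0.0]]
# lengths -> [3, 2]
```

Note that skipping packed sequences forfeits their efficiency benefit (RNNs no longer skip padded timesteps), so this is a stop-gap rather than a drop-in replacement.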
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-2998?
A memory corruption bug in PyTorch 2.6.0's pad_packed_sequence function is exploitable by any local user with low privileges — relevant in shared ML compute environments (JupyterHub, Kubeflow, shared training clusters). No patch version is identified yet; restrict local access to ML training infrastructure and monitor for anomalous process behavior on GPU/training nodes. Exploit code is public, raising urgency despite the local-only attack vector.
Is CVE-2025-2998 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2025-2998, increasing the risk of exploitation.
How to fix CVE-2025-2998?
1. Pin PyTorch to a version prior to 2.6.0 or await an official patch — monitor pytorch/pytorch#149622 for fix status.
2. Restrict local access to training infrastructure; enforce least-privilege on shared ML compute nodes.
3. Isolate training workloads in containers with no shared memory namespaces between users.
4. Audit JupyterHub and Kubeflow deployments for multi-tenant exposure — ensure namespace isolation.
5. Detection: monitor for anomalous memory usage patterns or segfaults in PyTorch worker processes, which may indicate exploitation attempts.
6. If PyTorch 2.6.0 is required, disable use of pad_packed_sequence and replace with equivalent manual padding logic as a temporary workaround.
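The pinning step can be expressed as a pip requirements constraint (this assumes, as the advisory implies, that releases prior to 2.6.0 are unaffected — verify against the upstream issue before relying on it):

```
# requirements.txt — hold PyTorch below the affected release
torch<2.6.0
```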
What systems are affected by CVE-2025-2998?
This vulnerability affects the following AI/ML architecture patterns: training pipelines, shared ML compute clusters, model serving (RNN-based), MLOps platforms (Kubeflow, Ray, MLflow), NLP inference pipelines.
What is the CVSS score for CVE-2025-2998?
CVE-2025-2998 has a CVSS v3.1 base score of 5.3 (MEDIUM). The EPSS exploitation probability is 0.16%.
Technical Details
NVD Description
A vulnerability classified as critical by the reporting database was found in PyTorch 2.6.0. Affected is the function torch.nn.utils.rnn.pad_packed_sequence; manipulation of its input leads to memory corruption. Local access is required to carry out the attack. The exploit has been disclosed to the public and may be used.
Exploitation Scenario
An adversary with a low-privilege account on a shared GPU training cluster (common in academic or enterprise ML environments) submits a crafted training job that calls pad_packed_sequence with a malformed packed sequence tensor. The memory corruption primitive allows the adversary to overwrite adjacent memory regions — potentially corrupting another user's model weights mid-training, leaking gradient data from a co-located training process, or achieving code execution in the context of the PyTorch worker. In a Kubeflow or Ray cluster where multiple training jobs share node memory, this could enable cross-tenant data leakage or denial of service against other ML workloads.
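The detection guidance above (segfaults in PyTorch worker processes) can be sketched as a kernel-log filter. The log-line shape and library name matched here are assumptions about a typical Linux dmesg segfault report, not something specified by the advisory:

```python
import re

# Assumed shape of a Linux kernel segfault report, e.g.:
#   python[4242]: segfault at 7f... ip ... sp ... error 4 in libtorch_cpu.so[...]
SEGFAULT_RE = re.compile(
    r"(?P<proc>\S+)\[(?P<pid>\d+)\]: segfault at \S+ .* in (?P<obj>\S+)"
)

def pytorch_segfault_events(log_lines):
    """Yield (process, pid, object) for segfaults landing in PyTorch libraries."""
    for line in log_lines:
        m = SEGFAULT_RE.search(line)
        if m and "torch" in m.group("obj"):
            yield m.group("proc"), int(m.group("pid")), m.group("obj")
```

Feeding this from `journalctl -k -f` (or a log shipper) and alerting on repeated hits from one user's jobs would cover the anomalous-process-behavior monitoring recommended above.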
Weaknesses (CWE)
- CWE-119: Improper Restriction of Operations within the Bounds of a Memory Buffer
CVSS Vector
CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:L
References
- github.com/pytorch/pytorch/issues/149622 (vendor issue)
- vuldb.com (third-party VDB entry; permissions required)
- github.com/fkie-cad/nvd-json-data-feeds (exploit reference)
Related Vulnerabilities
- CVE-2024-5452 (9.8) pytorch-lightning: RCE via deepdiff Delta deserialization — same package: torch
- CVE-2023-43654 (9.8) TorchServe: SSRF + RCE via unrestricted model URL loading — same package: torch
- CVE-2022-45907 (9.8) PyTorch: RCE via unsafe eval in JIT annotations — same package: torch
- CVE-2022-0845 (9.8) pytorch-lightning: code injection enables full RCE — same package: torch
- CVE-2024-35198 (9.8) TorchServe: URL bypass enables arbitrary model loading — same package: torch