CVE-2025-2999: PyTorch: memory corruption in RNN sequence unpacking

MEDIUM PoC AVAILABLE
Published March 31, 2025
CISO Take

A local attacker with low privileges on shared ML compute can trigger memory corruption in PyTorch 2.6.0's RNN sequence handling, risking training data or credential exposure from process memory. Multi-tenant GPU clusters, shared Jupyter environments, and CI/CD pipelines running RNN-based workloads carry the highest exposure. Upgrade away from PyTorch 2.6.0 once a patch lands (track issue #149622) and enforce workload isolation in the interim.

Risk Assessment

Moderate organizational risk despite a CVSS 5.3 score. The local attack vector (AV:L) substantially limits exposure in isolated single-user environments, but multi-tenant ML infrastructure (shared Jupyter hubs, HPC clusters, SageMaker Studio shared kernels) eliminates this barrier entirely. CWE-119 memory corruption bugs can, in the hands of skilled attackers, yield exploitation primitives beyond what the initial CVSS score reflects. The absence of active exploitation or a KEV listing reduces urgency, but PyTorch's prevalence across virtually every AI/ML stack makes patch lag a meaningful supply chain risk.

Affected Systems

Package: pytorch
Ecosystem: pip
Vulnerable Range: 2.6.0
Patched: No patch available


Severity & Risk

CVSS 3.1: 5.3 / 10
EPSS: 0.1% chance of exploitation in 30 days (higher than 26% of all CVEs)
Exploitation Status: Exploit Available (MEDIUM)
Sophistication: Moderate
Exploitation Confidence: Medium (public PoC indexed in trickest/cve)

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

Attack Vector (AV): Local
Attack Complexity (AC): Low
Privileges Required (PR): Low
User Interaction (UI): None
Scope (S): Unchanged
Confidentiality (C): Low
Integrity (I): Low
Availability (A): Low

Recommended Action

  1. Pin PyTorch below 2.6.0 or freeze at an unaffected version until an official patch releases; monitor pytorch/pytorch issue #149622 for status.

  2. Isolate training workloads per user using separate containers or VMs in any multi-tenant compute environment.

  3. Apply least-privilege: revoke unnecessary local shell access to ML training nodes.

  4. Audit codebases for torch.nn.utils.rnn.unpack_sequence and pack_padded_sequence call sites.

  5. Monitor PyTorch training processes for unexpected crashes or segmentation faults as indicators of exploitation attempts.

  6. Add PyTorch version pinning and vulnerability scanning to your ML dependency pipeline (pip-audit, Safety, Dependabot).
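As a lightweight guard for the version-pinning step, a CI check can fail the build when an affected PyTorch release is installed. This is a minimal sketch using only the standard library; it assumes, per this advisory, that 2.6.0 is the only affected release (the `AFFECTED` set is an assumption to update once a fixed version ships).

```python
import importlib.metadata

# Per CVE-2025-2999, 2.6.0 is the named affected release; update this set
# when upstream publishes a patched version.
AFFECTED = {"2.6.0"}


def torch_version_is_affected(version: str) -> bool:
    """Return True if a PyTorch version string matches an affected release."""
    # Strip local build tags like "2.6.0+cu124" before comparing.
    base = version.split("+", 1)[0]
    return base in AFFECTED


def ci_gate_passes() -> bool:
    """CI gate: True when torch is absent or not an affected version."""
    try:
        version = importlib.metadata.version("torch")
    except importlib.metadata.PackageNotFoundError:
        return True  # torch not installed in this environment
    return not torch_version_is_affected(version)
```

Running `ci_gate_passes()` in a pipeline step alongside pip-audit or Safety gives a fast, dependency-free backstop while the ecosystem waits on a patched release.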

CISA SSVC Assessment

Decision: Track
Exploitation: none
Automatable: No
Technical Impact: partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Code Execution, Data Extraction, Supply Chain, Framework, Training Data, Inference
MITRE ATLAS: AML.T0010.001, AML.T0025, AML.T0037

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 9 - Risk Management System
ISO 42001
A.6.1 - AI risk management processes
NIST AI RMF
MANAGE 2.4 - Vulnerability and Incident Response for AI Systems
OWASP LLM Top 10
LLM03:2025 - Supply Chain

Frequently Asked Questions

What is CVE-2025-2999?

A local attacker with low privileges on shared ML compute can trigger memory corruption in PyTorch 2.6.0's RNN sequence handling, risking training data or credential exposure from process memory. Multi-tenant GPU clusters, shared Jupyter environments, and CI/CD pipelines running RNN-based workloads carry the highest exposure. Upgrade away from PyTorch 2.6.0 once a patch lands (track issue #149622) and enforce workload isolation in the interim.

Is CVE-2025-2999 actively exploited?

No active exploitation of CVE-2025-2999 has been reported and it is not on the CISA KEV list, but proof-of-concept exploit code is publicly available, increasing the risk of exploitation.

How to fix CVE-2025-2999?

1. Pin PyTorch below 2.6.0 or freeze at an unaffected version until an official patch releases; monitor pytorch/pytorch issue #149622 for status.
2. Isolate training workloads per user using separate containers or VMs in any multi-tenant compute environment.
3. Apply least-privilege: revoke unnecessary local shell access to ML training nodes.
4. Audit codebases for `torch.nn.utils.rnn.unpack_sequence` and `pack_padded_sequence` call sites.
5. Monitor PyTorch training processes for unexpected crashes or segmentation faults as indicators of exploitation attempts.
6. Add PyTorch version pinning and vulnerability scanning to your ML dependency pipeline (pip-audit, Safety, Dependabot).

What systems are affected by CVE-2025-2999?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, sequence-to-sequence models, multi-tenant ML platforms, NLP inference pipelines, time-series forecasting pipelines.

What is the CVSS score for CVE-2025-2999?

CVE-2025-2999 has a CVSS v3.1 base score of 5.3 (MEDIUM). The EPSS exploitation probability is 0.09%.

Technical Details

NVD Description

A vulnerability was found in PyTorch 2.6.0. It has been rated as critical. Affected by this issue is the function torch.nn.utils.rnn.unpack_sequence. The manipulation leads to memory corruption. Attacking locally is a requirement. The exploit has been disclosed to the public and may be used.

Exploitation Scenario

A rogue insider, or an attacker who has compromised an ML engineer account on a shared HPC training cluster, crafts a malformed packed sequence tensor and feeds it to a training script that calls `torch.nn.utils.rnn.unpack_sequence`. The resulting out-of-bounds access corrupts heap memory adjacent to active tensor allocations, potentially leaking portions of other tenants' training data batches or authentication tokens loaded into the same process. In a shared Jupyter environment, this could expose proprietary training datasets across user boundaries. A more advanced attacker could attempt to convert the write primitive into arbitrary code execution on the training node, pivoting to exfiltrate model weights or inject poisoned gradients.
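One defense-in-depth pattern against the scenario above is to validate packed-sequence metadata before it ever reaches `unpack_sequence`. A well-formed `PackedSequence` carries a `batch_sizes` vector that is positive, non-increasing, and sums to the total number of packed timesteps. The sketch below checks those invariants in pure Python (no `torch` dependency, so it stays illustrative); `validate_packed_metadata` and its parameter names are hypothetical, and this is an input-validation mitigation, not the upstream fix for the CVE.

```python
from typing import Sequence


def validate_packed_metadata(batch_sizes: Sequence[int], total_steps: int) -> None:
    """Reject packed-sequence metadata that violates PackedSequence invariants.

    A legitimate PackedSequence has batch_sizes that are positive,
    non-increasing, and sum to the number of packed timesteps. Malformed
    metadata is exactly what a crafted tensor would carry, so failing
    closed here keeps it away from the native unpacking code.
    """
    if not batch_sizes:
        raise ValueError("batch_sizes must be non-empty")
    if any(b <= 0 for b in batch_sizes):
        raise ValueError("batch_sizes must be positive")
    if any(a < b for a, b in zip(batch_sizes, batch_sizes[1:])):
        raise ValueError("batch_sizes must be non-increasing")
    if sum(batch_sizes) != total_steps:
        raise ValueError("batch_sizes must sum to the packed data length")
```

For example, two sequences of lengths 3 and 2 pack into 5 timesteps with `batch_sizes` of `[2, 2, 1]`, which passes; a crafted `[2, 3, 1]` violates the non-increasing invariant and is rejected before any native unpacking runs.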

Weaknesses (CWE)

CWE-119: Improper Restriction of Operations within the Bounds of a Memory Buffer

CVSS Vector

CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:L

Timeline

Published: March 31, 2025
Last Modified: May 29, 2025
First Seen: March 31, 2025

Related Vulnerabilities