CVE-2025-12058: Keras: safe_mode bypass enables file read and SSRF

GHSA-mq84-hjqx-cwf2 MEDIUM PoC AVAILABLE
Published October 29, 2025
CISO Take

If your ML infrastructure loads Keras model files from external or untrusted sources, you are exposed even when safe_mode=True is explicitly set — the documented security control is ineffective. A crafted .keras archive can silently exfiltrate local files (API keys, cloud credentials, /etc/passwd) or pivot to internal services via SSRF using cloud storage handlers. Patch to keras >= 3.12.0 immediately and treat any model file from an uncontrolled source as untrusted input until verified.

Risk Assessment

Contextual risk is higher than the MEDIUM advisory severity suggests. The critical factor is the bypass of safe_mode=True — a control that organizations explicitly invoke when loading potentially untrusted models, meaning defenders have a false sense of security. SSRF via tf.io.gfile is particularly dangerous in cloud environments where GCS/S3/HDFS handlers and the EC2/GCP metadata service (169.254.169.254) are reachable from within training or serving infrastructure. Exposure is significant wherever model-sharing workflows exist: MLOps platforms, collaborative notebooks, model hubs, or CI/CD pipelines that pull models from external registries. EPSS is currently low (0.00076), but the attack is trivial to craft now that the researcher's disclosure and a public PoC are available.

Affected Systems

Package   Ecosystem   Vulnerable Range   Patched
keras     pip         < 3.12.0           3.12.0

Do you use keras? Any version below 3.12.0 is affected; a quick check is sketched below.
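
A quick way to check exposure, assuming the installed version is a plain X.Y.Z string (pre-release suffixes would need extra parsing):

    # check_keras_version.py: flag installs older than the patched 3.12.0
    from importlib.metadata import PackageNotFoundError, version

    try:
        v = version("keras")
    except PackageNotFoundError:
        print("keras is not installed")
    else:
        major, minor = (int(part) for part in v.split(".")[:2])
        status = "VULNERABLE (< 3.12.0)" if (major, minor) < (3, 12) else "patched"
        print(f"keras {v}: {status}")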

Severity & Risk

CVSS 3.1: N/A
EPSS: 0.1% chance of exploitation in the next 30 days (higher than 19% of all CVEs)
Exploitation Status: Exploit Available
Exploitation: MEDIUM
Sophistication: Trivial
Exploitation Confidence: medium (public PoC indexed via trickest/cve)

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Recommended Action

6 steps
  1. PATCH

    Upgrade keras to >= 3.12.0 immediately. Verify with pip show keras.

  2. AUDIT

    Inventory all code paths using keras.Model.load_model() or tf.keras.models.load_model() and identify where model files originate.

  3. ISOLATE

    Until patched, run model loading in network-restricted environments (no outbound HTTP/HTTPS, no access to metadata endpoints). Block 169.254.169.254 at the host/container level.

  4. VALIDATE INPUT

    Never accept .keras files from untrusted sources without sandbox inspection. Consider hash verification against a trusted artifact registry; a sketch follows this list.

  5. DETECT

    Monitor for unexpected outbound HTTP requests or unusual file reads (esp. /etc/, ~/.ssh/, credential files) from ML training/serving processes.

  6. WORKAROUND

    If a patch is not immediately possible, wrap load_model calls in a sandboxed subprocess with restricted filesystem and network access using seccomp/namespaces or gVisor; a sketch follows this list.
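
A minimal sketch of the hash verification from step 4, assuming your trusted artifact registry publishes a SHA-256 digest per model file; verify_model_digest and EXPECTED_SHA256 are placeholder names, not an existing API:

    # verify_digest.py: refuse to load a model whose digest is not pinned
    import hashlib

    def verify_model_digest(path: str, expected_sha256: str) -> bool:
        """Stream the file and compare its SHA-256 to the registry-pinned digest."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest() == expected_sha256

    EXPECTED_SHA256 = "0" * 64  # placeholder: the digest pinned in your registry
    if not verify_model_digest("model.keras", EXPECTED_SHA256):
        raise RuntimeError("model.keras does not match its pinned digest; refusing to load")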
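
A minimal sketch of the step 6 workaround, assuming a Linux host with unprivileged user namespaces enabled (unshare --map-root-user). The network namespace blocks the SSRF path, including 169.254.169.254; filesystem confinement still requires mount namespaces, a seccomp profile, or gVisor as noted above. load_untrusted_model is a placeholder name:

    # sandbox_load.py: parse an untrusted .keras file in a network-isolated child
    import subprocess
    import sys
    import textwrap

    CHILD = textwrap.dedent("""\
        import sys
        import keras

        # Any file read or HTTP fetch triggered during deserialization happens
        # here, confined by the parent's namespace restrictions.
        model = keras.models.load_model(sys.argv[1], safe_mode=True)
        model.summary()
    """)

    def load_untrusted_model(path: str, timeout: int = 300) -> bool:
        """Return True only if the model parses cleanly inside the sandboxed child."""
        cmd = [
            # New user + network namespace: the child sees no usable network
            # interfaces, so outbound connections simply fail.
            "unshare", "--map-root-user", "--net",
            sys.executable, "-c", CHILD, path,
        ]
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
        return result.returncode == 0

Combined with the digest check above, loading becomes verify, then parse in isolation, before the artifact ever reaches the production process.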

CISA SSVC Assessment

Decision: Track
Exploitation: none
Automatable: No
Technical Impact: partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15(5) - Cybersecurity requirements for high-risk AI systems
Article 9 - Risk management system
ISO 42001
6.1.2 - AI risk assessment
A.6.2.3 - AI system security
NIST AI RMF
GOVERN 1.2 - Accountability structures for AI risk
MANAGE 2.2 - Mechanisms to sustain oversight of AI risk responses
OWASP LLM Top 10
LLM03:2025 - Supply Chain

Frequently Asked Questions

What is CVE-2025-12058?

CVE-2025-12058 is a safe_mode bypass in Keras model loading. If your ML infrastructure loads Keras model files from external or untrusted sources, you are exposed even when safe_mode=True is explicitly set; the documented security control is ineffective. A crafted .keras archive can silently exfiltrate local files (API keys, cloud credentials, /etc/passwd) or pivot to internal services via SSRF using cloud storage handlers. Patch to keras >= 3.12.0 immediately and treat any model file from an uncontrolled source as untrusted input until verified.

Is CVE-2025-12058 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2025-12058, increasing the risk of exploitation.

How to fix CVE-2025-12058?

1. PATCH: Upgrade keras to >= 3.12.0 immediately. Verify with pip show keras.
2. AUDIT: Inventory all code paths using keras.Model.load_model() or tf.keras.models.load_model() and identify where model files originate.
3. ISOLATE: Until patched, run model loading in network-restricted environments (no outbound HTTP/HTTPS, no access to metadata endpoints). Block 169.254.169.254 at the host/container level.
4. VALIDATE INPUT: Never accept .keras files from untrusted sources without sandbox inspection. Consider hash verification against a trusted artifact registry.
5. DETECT: Monitor for unexpected outbound HTTP requests or unusual file reads (esp. /etc/, ~/.ssh/, credential files) from ML training/serving processes.
6. WORKAROUND (if patch is not immediately possible): Wrap load_model calls in a sandboxed subprocess with restricted filesystem and network access using seccomp/namespaces or gVisor.

What systems are affected by CVE-2025-12058?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps platforms, model registries, collaborative notebooks.

What is the CVSS score for CVE-2025-12058?

No CVSS score has been assigned yet.

Technical Details

NVD Description

The Keras.Model.load_model method, including when executed with the intended security mitigation safe_mode=True, is vulnerable to arbitrary local file loading and Server-Side Request Forgery (SSRF). This vulnerability stems from the way the StringLookup layer is handled during model loading from a specially crafted .keras archive. The constructor for the StringLookup layer accepts a vocabulary argument that can specify a local file path or a remote file path.

* Arbitrary Local File Read: An attacker can create a malicious .keras file that embeds a local path in the StringLookup layer's configuration. When the model is loaded, Keras will attempt to read the content of the specified local file and incorporate it into the model state (e.g., retrievable via get_vocabulary()), allowing an attacker to read arbitrary local files on the hosting system.

* Server-Side Request Forgery (SSRF): Keras utilizes tf.io.gfile for file operations. Since tf.io.gfile supports remote filesystem handlers (such as GCS and HDFS) and HTTP/HTTPS protocols, the same mechanism can be leveraged to fetch content from arbitrary network endpoints on the server's behalf, resulting in an SSRF condition.

The security issue is that the feature allowing external path loading was not properly restricted by the safe_mode=True flag, which was intended to prevent such unintended data access.
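
The root cause is concrete enough to pre-screen for. The sketch below assumes the documented .keras layout (a zip archive with config.json at its root) and walks the serialized layer tree, flagging any lookup layer whose vocabulary is a string, i.e. a file path or URL rather than an inline list. scan_keras_archive is a placeholder name, and a clean scan is a screen, not proof of safety:

    # scan_keras_archive.py: flag path/URL vocabularies without loading the model
    import json
    import sys
    import zipfile

    SUSPECT_CLASSES = {"StringLookup", "IntegerLookup", "IndexLookup"}

    def iter_layer_configs(node):
        """Recursively yield (class_name, config) pairs from the serialized tree."""
        if isinstance(node, dict):
            if "class_name" in node and isinstance(node.get("config"), dict):
                yield node["class_name"], node["config"]
            for value in node.values():
                yield from iter_layer_configs(value)
        elif isinstance(node, list):
            for item in node:
                yield from iter_layer_configs(item)

    def scan_keras_archive(path):
        """Return (layer_name, vocabulary) pairs where vocabulary is a path/URL."""
        with zipfile.ZipFile(path) as zf:
            config = json.loads(zf.read("config.json"))
        return [
            (cfg.get("name", "<unnamed>"), cfg["vocabulary"])
            for class_name, cfg in iter_layer_configs(config)
            if class_name in SUSPECT_CLASSES and isinstance(cfg.get("vocabulary"), str)
        ]

    if __name__ == "__main__":
        for name, vocab in scan_keras_archive(sys.argv[1]):
            print(f"SUSPICIOUS: layer {name!r} loads vocabulary from {vocab!r}")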

Exploitation Scenario

An attacker targets an ML platform that accepts user-submitted .keras model files for evaluation or fine-tuning. They craft a malicious .keras archive containing a StringLookup layer with vocabulary set to a local file path (e.g., /var/run/secrets/kubernetes.io/serviceaccount/token or /root/.aws/credentials). The victim platform calls keras.Model.load_model('malicious.keras', safe_mode=True) — believing safe_mode protects them. Keras reads the target file and populates the vocabulary state. The attacker retrieves the exfiltrated content via the model's get_vocabulary() API call or inference response. In a cloud environment, the attacker alternatively sets vocabulary to https://169.254.169.254/latest/meta-data/iam/security-credentials/ to harvest IAM credentials via SSRF, enabling lateral movement across the cloud account.
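
The underlying mechanism is an ordinary, documented Keras feature, which is why it slipped past safe_mode: a StringLookup vocabulary may be given as a path, and the file's contents become readable model state. A benign illustration of that mechanism (reading a harmless file rather than a secret):

    # Illustration of the mechanism only, not an exploit: the layer reads the
    # file at the given path and exposes its lines as vocabulary entries.
    import keras

    layer = keras.layers.StringLookup(vocabulary="/etc/hostname")
    print(layer.get_vocabulary())  # the file's lines, plus the OOV token

The vulnerability is that load_model honors such a path when it arrives embedded in an attacker-supplied archive, even under safe_mode=True, and via tf.io.gfile the path can just as well be a URL.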

Timeline

Published
October 29, 2025
Last Modified
October 30, 2025
First Seen
October 29, 2025

Related Vulnerabilities