CVE-2025-9906: Keras: safe_mode bypass enables RCE via model load

GHSA-36fq-jgmw-4r9c · HIGH · PoC available
Published September 19, 2025
CISO Take

Any environment loading Keras models — including MLOps pipelines, Jupyter notebooks, and model-serving infrastructure — is vulnerable to arbitrary code execution if it processes untrusted .keras files, regardless of safe_mode=True. Upgrade to Keras 3.11.0 immediately and enforce a policy of never loading models from unverified sources. Until patched, treat every .keras archive as an untrusted executable.

Risk Assessment

CVSS 7.3 understates the real-world risk in AI/ML contexts. The safe_mode bypass is a high-confidence, medium-sophistication attack that undermines the primary security control Keras provides for model loading. ML engineers routinely download pre-trained models from public registries (HuggingFace, Kaggle), creating a broad supply chain exposure surface. A single malicious model distributed to a team or published in a shared model registry can compromise entire ML platforms. The local attack vector and user interaction requirements reduce opportunistic risk but are easily satisfied in ML workflows where model sharing is standard practice.

Affected Systems

Package: keras (pip)
Vulnerable Range: < 3.11.0
Patched Version: 3.11.0

Severity & Risk

CVSS 3.1: 7.3 / 10
EPSS: 0.1% chance of exploitation in 30 days (higher than 17% of all CVEs)
Exploitation Status: Exploit available (exploitation signal: medium)
Sophistication: Moderate
Exploitation Confidence: Medium (public PoC indexed in trickest/cve)

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

Attack Vector (AV): Local
Attack Complexity (AC): Low
Privileges Required (PR): Low
User Interaction (UI): Required
Scope (S): Unchanged
Confidentiality (C): High
Integrity (I): High
Availability (A): High

Recommended Action

7 steps
  1. PATCH

    Upgrade Keras to >= 3.11.0 immediately. Verify with: pip show keras | grep Version. A programmatic version gate is sketched after this list.

  2. INVENTORY

    Identify all code paths calling Model.load_model() across your ML infrastructure, notebooks, and serving code. A quick repository sweep is sketched after this list.

  3. SOURCE CONTROL

    Enforce model provenance — only load models from internal, access-controlled registries. Prohibit direct loading from public URLs or unverified external sources.

  4. SIGNING

    Implement model artifact signing (e.g., using Sigstore/cosign or internal PKI) to verify model integrity before loading. A simplified digest-check stand-in is sketched after this list.

  5. SCANNING

    Inspect .keras archives (they are ZIP files): examine config.json for calls to keras.config.enable_unsafe_deserialization() as an IOC. A minimal archive scanner is sketched after this list.

  6. SANDBOXING

    Where feasible, load untrusted models in isolated containers or VMs with no network access and minimal filesystem permissions.

  7. DETECT

    Alert on unexpected Python process spawns originating from model-loading services. A process-tree monitoring sketch follows this list.
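
A minimal sketch for step 1: a programmatic version gate suitable for CI. It assumes the packaging library is installed; the failure message is illustrative.

# Fail fast if the installed Keras predates the 3.11.0 patch.
from importlib.metadata import version
from packaging.version import Version

PATCHED = Version("3.11.0")
installed = Version(version("keras"))
if installed < PATCHED:
    raise SystemExit(f"keras {installed} is vulnerable to CVE-2025-9906; upgrade to >= {PATCHED}")
print(f"keras {installed} is at or above the patched release")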
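A quick repository sweep sketch for step 2. The search root is a placeholder; a real inventory should also cover notebooks (.ipynb), deployment manifests, and serving configs.

import pathlib

ROOT = pathlib.Path(".")  # placeholder: point at your repo checkout

# List every Python file and line referencing load_model.
for path in ROOT.rglob("*.py"):
    try:
        text = path.read_text(encoding="utf-8", errors="replace")
    except OSError:
        continue
    for lineno, line in enumerate(text.splitlines(), start=1):
        if "load_model" in line:
            print(f"{path}:{lineno}: {line.strip()}")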
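A simplified stand-in for step 4: pinning and verifying a SHA-256 digest before any load. A production setup would verify a real signature (Sigstore/cosign or internal PKI) instead; the digest constant here is hypothetical.

import hashlib

# Hypothetical digest published out-of-band alongside the model artifact.
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify_digest(path: str, expected: str = EXPECTED_SHA256) -> None:
    """Refuse to proceed unless the file matches the pinned digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 16), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected:
        raise RuntimeError(f"{path}: digest mismatch; refusing to load model")

# verify_digest("model.keras")  # call before any load_model() invocation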
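A minimal archive-scanner sketch for step 5. It relies only on facts in this advisory: a .keras file is a ZIP archive containing config.json, and the IOC is a reference to keras.config.enable_unsafe_deserialization(). The substring match is deliberately coarse; a production scanner should parse the JSON and walk the layer configs.

import sys
import zipfile

IOC = "enable_unsafe_deserialization"

def scan_keras_archive(path: str) -> bool:
    """Return True if the archive's config.json references the safe-mode toggle."""
    with zipfile.ZipFile(path) as archive:
        raw = archive.read("config.json").decode("utf-8", errors="replace")
    return IOC in raw

if __name__ == "__main__":
    for model_path in sys.argv[1:]:
        verdict = f"SUSPICIOUS: references {IOC}" if scan_keras_archive(model_path) else "no IOC found"
        print(f"{model_path}: {verdict}")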
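A hedged monitoring sketch for step 7, using the psutil package. The serving-process name is a hypothetical placeholder; in practice this logic belongs in EDR, auditd, or container runtime policy rather than a polling script.

import psutil

SERVING_NAME = "model-server"  # hypothetical; adjust to your deployment

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] != SERVING_NAME:
        continue
    # A model-loading service normally spawns no children; any child is suspect.
    for child in proc.children(recursive=True):
        print(f"ALERT: {SERVING_NAME} (pid {proc.pid}) spawned "
              f"{child.name()} (pid {child.pid})")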

CISA SSVC Assessment

Decision: Track
Exploitation: None
Automatable: No
Technical Impact: Total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Compliance Impact

This CVE is relevant to:

EU AI Act: Art. 15 - Accuracy, robustness and cybersecurity; Art. 9 - Risk management system
ISO 42001: A.6.2.6 - AI system supply chain management; A.9.3 - Security of AI system components
NIST AI RMF: GOVERN 6.1 - Policies and procedures for AI supply chain; MANAGE 2.2 - Mechanisms to sustain treatment of identified AI risks
OWASP LLM Top 10: LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2025-9906?

CVE-2025-9906 is a safe_mode bypass in the Keras Model.load_model method. A specially crafted .keras archive can include a config.json that invokes keras.config.enable_unsafe_deserialization(), silently disabling safe mode; a subsequent Lambda layer carrying pickled Python code then executes on load, yielding arbitrary code execution even when safe_mode=True is set. Any environment that loads untrusted .keras files, including MLOps pipelines, Jupyter notebooks, and model-serving infrastructure, is exposed. The issue is fixed in Keras 3.11.0.

Is CVE-2025-9906 actively exploited?

Proof-of-concept exploit code for CVE-2025-9906 is publicly available (indexed in trickest/cve), increasing the risk of exploitation; CISA's SSVC assessment currently records no observed in-the-wild exploitation.

How to fix CVE-2025-9906?

1. PATCH: Upgrade Keras to >= 3.11.0 immediately; verify with pip show keras | grep Version.
2. INVENTORY: Identify all code paths calling Model.load_model() across your ML infrastructure, notebooks, and serving code.
3. SOURCE CONTROL: Enforce model provenance; only load models from internal, access-controlled registries, and prohibit direct loading from public URLs or unverified external sources.
4. SIGNING: Implement model artifact signing (e.g., using Sigstore/cosign or internal PKI) to verify model integrity before loading.
5. SCANNING: Inspect .keras archives (they are ZIP files) and examine config.json for calls to keras.config.enable_unsafe_deserialization() as an IOC.
6. SANDBOXING: Where feasible, load untrusted models in isolated containers or VMs with no network access and minimal filesystem permissions.
7. DETECT: Alert on unexpected Python process spawns originating from model-loading services.

What systems are affected by CVE-2025-9906?

Any system running Keras earlier than 3.11.0 that loads .keras archives is affected. In practice this spans the following AI/ML architecture patterns: model serving, training pipelines, MLOps / CI-CD pipelines, research and notebook environments, and model registries and artifact stores.

What is the CVSS score for CVE-2025-9906?

CVE-2025-9906 has a CVSS v3.1 base score of 7.3 (HIGH). The EPSS exploitation probability is 0.06%.

Technical Details

NVD Description

The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True. An attacker can create a specially crafted .keras model archive that, when loaded via Model.load_model, triggers arbitrary code execution. This is achieved by crafting a special config.json (a file within the .keras archive) that invokes keras.config.enable_unsafe_deserialization() to disable safe mode. Once safe mode is disabled, the attacker can use the Lambda layer feature of Keras, which allows arbitrary Python code in the form of pickled code. Both can appear in the same archive: the keras.config.enable_unsafe_deserialization() call simply needs to appear first in the archive, with the Lambda layer carrying the arbitrary code second.
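
For illustration, a minimal sketch of the load pattern this advisory concerns, via the keras.saving.load_model entry point; the model path is a hypothetical placeholder. Before Keras 3.11.0, even the explicit safe_mode=True shown here offered no protection against the crafted archive described above.

import keras

# "downloaded_model.keras" is a hypothetical externally sourced archive;
# prior to 3.11.0 its config.json could silently re-enable unsafe
# deserialization before the malicious Lambda layer was deserialized.
model = keras.saving.load_model("downloaded_model.keras", safe_mode=True)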

Exploitation Scenario

An adversary publishes a malicious Keras model to a public repository (HuggingFace, GitHub, or a shared internal model registry) disguised as a legitimate fine-tuned model (e.g., a BERT variant or image classifier). A data scientist or automated pipeline downloads and loads the model with Model.load_model('model.keras') believing safe_mode=True provides protection. The crafted config.json in the archive is processed first, invoking keras.config.enable_unsafe_deserialization() to silently disable safe mode. The subsequent Lambda layer containing pickled malicious Python code then executes with full privileges of the loading process — enabling credential theft, data exfiltration, reverse shell establishment, or lateral movement into the ML platform. In CI/CD contexts, this can compromise build environments and inject backdoors into subsequently trained models.

CVSS Vector

CVSS:3.1/AV:L/AC:L/PR:L/UI:R/S:U/C:H/I:H/A:H

Timeline

Published: September 19, 2025
Last Modified: September 23, 2025
First Seen: September 19, 2025

Related Vulnerabilities