CVE-2025-9905: Keras: safe_mode bypass enables RCE via .h5 model files

GHSA-36rr-ww3j-vrjv HIGH PoC AVAILABLE
Published September 19, 2025
CISO Take

Teams loading Keras models in .h5 format with safe_mode=True are NOT protected — the flag is silently ignored for the legacy HDF5 format, allowing arbitrary code execution. Any pipeline that ingests third-party or user-supplied .h5 model files is exposed. Patch to Keras 3.11.3 immediately and audit all untrusted model ingestion workflows.

Risk Assessment

Despite a CVSS of 7.3 (local vector), the real-world risk in ML environments is higher than the score suggests. Model files in .h5 format are routinely shared across teams, uploaded to model hubs, or pulled from external sources — 'local' execution happens naturally as part of normal MLOps workflows. The critical issue is the false security promise: safe_mode=True is a documented protective control, and its silent failure creates a dangerous blind spot for security-conscious teams who believed they had mitigated this class of risk.

Affected Systems

Package Ecosystem Vulnerable Range Patched
keras pip >= 3.0.0, < 3.11.3 3.11.3

Severity & Risk

CVSS 3.1
7.3 / 10
EPSS
0.01%
chance of exploitation in 30 days
Higher than 0% of all CVEs
Exploitation Status
Exploit Available
Exploitation: MEDIUM
Sophistication
Moderate
Exploitation Confidence
medium
Public PoC indexed (trickest/cve)
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

Attack Vector (AV): Local
Attack Complexity (AC): Low
Privileges Required (PR): Low
User Interaction (UI): Required
Scope (S): Unchanged
Confidentiality (C): High
Integrity (I): High
Availability (A): High

Recommended Action

6 steps
  1. PATCH

    Upgrade to Keras >= 3.11.3 immediately.

  2. WORKAROUND

    Migrate model storage from .h5/.hdf5 to the SavedModel or .keras format — these formats do not use pickle deserialization.

  3. CONTROLS

    Enforce cryptographic signing and verification for all model artifacts before loading.

  4. DETECTION

    Audit logs for Model.load_model calls on .h5 files; instrument with file integrity monitoring on model directories.

  5. POLICY

    Block loading of .h5 files from untrusted sources (external model hubs, user uploads) at the application layer until patched.

  6. VERIFY

    Scan all .h5 files in model registries for unexpected Lambda layers using h5py inspection tools.
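The VERIFY step above can be sketched in Python. Keras stores the model architecture as JSON in the HDF5 file's `model_config` attribute (readable via `h5py.File(path, "r").attrs["model_config"]`); to keep the sketch self-contained it takes that JSON string directly, and the recursive walk is a simplifying assumption rather than Keras' own config traversal.

```python
import json

def find_lambda_layers(model_config_json: str) -> list:
    """Return the names of Lambda layers found in a Keras model config.

    In a real scan, obtain the JSON from the .h5 file with:
        h5py.File(path, "r").attrs["model_config"]
    (sketch; assumes the standard Keras 'model_config' layout).
    """
    cfg = json.loads(model_config_json)
    hits = []

    def walk(node):
        # Recurse through nested dicts/lists so wrapped models
        # (e.g. a Lambda inside a nested Functional model) are found too.
        if isinstance(node, dict):
            if node.get("class_name") == "Lambda":
                hits.append(node.get("config", {}).get("name", "<unnamed>"))
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(cfg)
    return hits
```

Any hit in a model you did not author should be treated as suspect until the embedded code has been reviewed.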

CISA SSVC Assessment

Decision Track
Exploitation none
Automatable No
Technical Impact total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 17 - Quality management system (third-party components)
Article 9 - Risk management system
ISO 42001
8.4 - AI system lifecycle processes
A.10.5 - Suppliers and third-party AI relationships
NIST AI RMF
MANAGE 2.4 - Treatments and responses for AI risks from third-party components
MAP 2.3 - AI system third-party entities and AI supply chain tracking
OWASP LLM Top 10
LLM05 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2025-9905?

CVE-2025-9905 is a safe_mode bypass in Keras: Model.load_model silently ignores safe_mode=True for legacy .h5/.hdf5 model files, so a crafted Lambda layer carrying pickled code executes arbitrary Python on load. Any pipeline that ingests third-party or user-supplied .h5 model files is exposed; Keras 3.11.3 fixes the issue.

Is CVE-2025-9905 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2025-9905, increasing the risk of exploitation.

How to fix CVE-2025-9905?

1. PATCH: Upgrade to Keras >= 3.11.3 immediately.
2. WORKAROUND: Migrate model storage from .h5/.hdf5 to the SavedModel or .keras format — these formats do not use pickle deserialization.
3. CONTROLS: Enforce cryptographic signing and verification for all model artifacts before loading.
4. DETECTION: Audit logs for Model.load_model calls on .h5 files; instrument with file integrity monitoring on model directories.
5. POLICY: Block loading of .h5 files from untrusted sources (external model hubs, user uploads) at the application layer until patched.
6. VERIFY: Scan all .h5 files in model registries for unexpected Lambda layers using h5py inspection tools.
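The PATCH step can be enforced in CI with a small version gate. A minimal sketch, assuming plain `major.minor.patch` version strings (pre-release suffixes would need `packaging.version` instead):

```python
def keras_is_patched(version: str) -> bool:
    """True if the given Keras version is outside the vulnerable range.

    The advisory's vulnerable range is >= 3.0.0, < 3.11.3; Keras 2.x
    (tf.keras) is below that range and outside this advisory's scope.
    """
    parts = tuple(int(p) for p in version.split(".")[:3])
    if parts < (3, 0, 0):
        return True  # below the advisory's stated vulnerable range
    return parts >= (3, 11, 3)
```

In practice you would feed it `importlib.metadata.version("keras")` from the target environment.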

What systems are affected by CVE-2025-9905?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, model registries, MLOps pipelines, collaborative ML environments.

What is the CVSS score for CVE-2025-9905?

CVE-2025-9905 has a CVSS v3.1 base score of 7.3 (HIGH). The EPSS exploitation probability is 0.01%.

Technical Details

NVD Description

The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True. One can create a specially crafted .h5/.hdf5 model archive that, when loaded via Model.load_model, will trigger arbitrary code to be executed. This is achieved by crafting a special .h5 archive file that uses the Lambda layer feature of keras which allows arbitrary Python code in the form of pickled code. The vulnerability comes from the fact that the safe_mode=True option is not honored when reading .h5 archives. Note that the .h5/.hdf5 format is a legacy format supported by Keras 3 for backwards compatibility.
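Until the patch lands everywhere, the unhonored flag can be enforced at the call site instead. A defensive wrapper sketch; `load_fn` is a hypothetical injection point standing in for the real Keras loader:

```python
from pathlib import Path

# Legacy HDF5 suffixes for which vulnerable Keras versions ignore safe_mode.
LEGACY_HDF5_SUFFIXES = {".h5", ".hdf5"}

def guarded_load(path, load_fn, safe_mode=True):
    """Refuse legacy HDF5 archives when safe_mode is requested, since
    vulnerable Keras versions silently drop the flag for that format."""
    if safe_mode and Path(path).suffix.lower() in LEGACY_HDF5_SUFFIXES:
        raise ValueError(
            f"refusing {path!r}: safe_mode is not honored for legacy HDF5 "
            "(CVE-2025-9905); convert the model to .keras first"
        )
    return load_fn(path, safe_mode=safe_mode)
```

Passing `safe_mode=False` still loads the file, preserving an explicit, auditable opt-out for artifacts you trust.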

Exploitation Scenario

An adversary targeting an organization's ML platform crafts a malicious .h5 model file by creating a Keras model with a Lambda layer containing pickled Python code (reverse shell, credential harvester, or cryptominer). They distribute it via a public model hub or spearphish an ML engineer with a 'pretrained model' for a popular task. The victim, following secure coding practices, calls Model.load_model('model.h5', safe_mode=True) — believing they're protected. Keras silently ignores safe_mode for .h5 files, deserializes the pickle payload, and executes the attacker's code with the privileges of the ML pipeline process. In a cloud MLOps environment, this typically yields access to training data, model weights, and cloud credentials via the instance metadata service.

CVSS Vector

CVSS:3.1/AV:L/AC:L/PR:L/UI:R/S:U/C:H/I:H/A:H

Timeline

Published
September 19, 2025
Last Modified
September 23, 2025
First Seen
September 19, 2025

Related Vulnerabilities