CVE-2025-9905: Keras: safe_mode bypass enables RCE via .h5 model files
GHSA-36rr-ww3j-vrjv | HIGH | PoC AVAILABLE

Teams loading Keras models in .h5 format with safe_mode=True are NOT protected — the flag is silently ignored for the legacy HDF5 format, allowing arbitrary code execution. Any pipeline that ingests third-party or user-supplied .h5 model files is exposed. Patch to Keras 3.11.3 immediately and audit all untrusted model ingestion workflows.
Risk Assessment
Despite a CVSS of 7.3 (local vector), the real-world risk in ML environments is higher than the score suggests. Model files in .h5 format are routinely shared across teams, uploaded to model hubs, or pulled from external sources — 'local' execution happens naturally as part of normal MLOps workflows. The critical issue is the false security promise: safe_mode=True is a documented protective control, and its silent failure creates a dangerous blind spot for security-conscious teams who believed they had mitigated this class of risk.
Recommended Action
Six steps:

1. PATCH: Upgrade to Keras >= 3.11.3 immediately.
2. WORKAROUND: Migrate model storage from .h5/.hdf5 to the SavedModel or .keras format; these formats do not use pickle deserialization.
3. CONTROLS: Enforce cryptographic signing and verification for all model artifacts before loading.
4. DETECTION: Audit logs for Model.load_model calls on .h5 files; instrument with file integrity monitoring on model directories.
5. POLICY: Block loading of .h5 files from untrusted sources (external model hubs, user uploads) at the application layer until patched.
6. VERIFY: Scan all .h5 files in model registries for unexpected Lambda layers using h5py inspection tools.
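The VERIFY step can be sketched in a few lines. Keras HDF5 files store the model architecture as a JSON string in the file's root `model_config` attribute, so once that JSON is parsed (e.g. with `json.loads(h5py.File(path, "r").attrs["model_config"])`), the check reduces to walking the config tree for `Lambda` layers. The function name `find_lambda_layers` and the sample config below are illustrative, not part of any official tooling; this is a minimal sketch, not a complete scanner.

```python
import json

def find_lambda_layers(model_config: dict) -> list:
    """Return names (or paths) of layers whose class_name is 'Lambda'."""
    hits = []

    def walk(node, path):
        if isinstance(node, dict):
            if node.get("class_name") == "Lambda":
                # Prefer the layer's own name if present in its config.
                hits.append(node.get("config", {}).get("name", path or "<unnamed>"))
            for key, value in node.items():
                walk(value, f"{path}/{key}" if path else key)
        elif isinstance(node, list):
            for i, value in enumerate(node):
                walk(value, f"{path}[{i}]")

    walk(model_config, "")
    return hits

# Hypothetical config shaped like Keras's serialized architecture JSON.
config = {
    "class_name": "Sequential",
    "config": {"layers": [
        {"class_name": "Dense", "config": {"name": "dense_1"}},
        {"class_name": "Lambda", "config": {"name": "suspicious_lambda"}},
    ]},
}
print(find_lambda_layers(config))  # ['suspicious_lambda']
```

Any hit warrants manual review: a Lambda layer in a third-party .h5 file is exactly the vehicle this CVE describes.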
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-9905?
Teams loading Keras models in .h5 format with safe_mode=True are NOT protected — the flag is silently ignored for the legacy HDF5 format, allowing arbitrary code execution. Any pipeline that ingests third-party or user-supplied .h5 model files is exposed. Patch to Keras 3.11.3 immediately and audit all untrusted model ingestion workflows.
Is CVE-2025-9905 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2025-9905, increasing the risk of exploitation.
How to fix CVE-2025-9905?
1. PATCH: Upgrade to Keras >= 3.11.3 immediately.
2. WORKAROUND: Migrate model storage from .h5/.hdf5 to the SavedModel or .keras format; these formats do not use pickle deserialization.
3. CONTROLS: Enforce cryptographic signing and verification for all model artifacts before loading.
4. DETECTION: Audit logs for Model.load_model calls on .h5 files; instrument with file integrity monitoring on model directories.
5. POLICY: Block loading of .h5 files from untrusted sources (external model hubs, user uploads) at the application layer until patched.
6. VERIFY: Scan all .h5 files in model registries for unexpected Lambda layers using h5py inspection tools.
What systems are affected by CVE-2025-9905?
This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, model registries, MLOps pipelines, collaborative ML environments.
What is the CVSS score for CVE-2025-9905?
CVE-2025-9905 has a CVSS v3.1 base score of 7.3 (HIGH). The EPSS exploitation probability is 0.01%.
Technical Details
NVD Description
The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True. One can create a specially crafted .h5/.hdf5 model archive that, when loaded via Model.load_model, will trigger arbitrary code to be executed. This is achieved by crafting a special .h5 archive file that uses the Lambda layer feature of keras which allows arbitrary Python code in the form of pickled code. The vulnerability comes from the fact that the safe_mode=True option is not honored when reading .h5 archives. Note that the .h5/.hdf5 format is a legacy format supported by Keras 3 for backwards compatibility.
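Because safe_mode=True is silently voided for the legacy format, a defensive wrapper can refuse .h5/.hdf5 paths before Model.load_model is ever called. The helper below is a minimal stdlib sketch under that assumption; `check_model_path` is a hypothetical name, and the actual keras load call is left to the caller.

```python
from pathlib import Path

# Legacy HDF5 extensions for which safe_mode=True is silently ignored
# (CVE-2025-9905); only non-legacy formats honor the flag.
LEGACY_HDF5_SUFFIXES = {".h5", ".hdf5"}

def check_model_path(path: str) -> Path:
    """Raise before loading if the file uses the legacy HDF5 format.

    Hypothetical helper: call this before Model.load_model so the
    safe_mode=True guarantee is not silently voided.
    """
    p = Path(path)
    if p.suffix.lower() in LEGACY_HDF5_SUFFIXES:
        raise ValueError(
            f"refusing legacy HDF5 model {p.name}: "
            "safe_mode is not honored for .h5/.hdf5 (CVE-2025-9905)"
        )
    return p

print(check_model_path("model.keras"))  # model.keras
# check_model_path("model.h5") would raise ValueError
```

This is a stopgap, not a substitute for patching: it only blocks the format for which the safe_mode promise is known to be broken.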
Exploitation Scenario
An adversary targeting an organization's ML platform crafts a malicious .h5 model file by creating a Keras model with a Lambda layer containing pickled Python code (reverse shell, credential harvester, or cryptominer). They distribute it via a public model hub or spearphish an ML engineer with a 'pretrained model' for a popular task. The victim, following secure coding practices, calls Model.load_model('model.h5', safe_mode=True) — believing they're protected. Keras silently ignores safe_mode for .h5 files, deserializes the pickle payload, and executes the attacker's code with the privileges of the ML pipeline process. In a cloud MLOps environment, this typically yields access to training data, model weights, and cloud credentials via the instance metadata service.
CVSS Vector
CVSS:3.1/AV:L/AC:L/PR:L/UI:R/S:U/C:H/I:H/A:H
Related Vulnerabilities
CVE-2025-49655 (9.8) — keras: Deserialization enables RCE (same package: keras)
CVE-2025-1550 (9.8) — Keras: safe_mode bypass enables RCE via model loading (same package: keras)
CVE-2024-3660 (9.8) — Keras: RCE via malicious model deserialization (same package: keras)
CVE-2024-49326 (9.8) — Affiliator WP Plugin: Unauthenticated Web Shell Upload
CVE-2025-12060 (9.8) — keras: Path Traversal enables file access (same package: keras)