CVE-2025-9906: Keras: safe_mode bypass enables RCE via model load
GHSA-36fq-jgmw-4r9c · HIGH · PoC AVAILABLE

Any environment loading Keras models — including MLOps pipelines, Jupyter notebooks, and model-serving infrastructure — is vulnerable to arbitrary code execution if it processes untrusted .keras files, regardless of safe_mode=True. Upgrade to Keras 3.11.0 immediately and enforce a policy of never loading models from unverified sources. Until patched, treat every .keras archive as an untrusted executable.
Risk Assessment
CVSS 7.3 understates the real-world risk in AI/ML contexts. The safe_mode bypass is a high-confidence, medium-sophistication attack that undermines the primary security control Keras provides for model loading. ML engineers routinely download pre-trained models from public registries (HuggingFace, Kaggle), creating a broad supply chain exposure surface. A single malicious model distributed to a team or published in a shared model registry can compromise entire ML platforms. The local attack vector and user interaction requirements reduce opportunistic risk but are easily satisfied in ML workflows where model sharing is standard practice.
Recommended Action
7 steps:

1. PATCH: Upgrade Keras to >= 3.11.0 immediately. Verify with: pip show keras | grep Version.
2. INVENTORY: Identify all code paths calling Model.load_model() across your ML infrastructure, notebooks, and serving code.
3. SOURCE CONTROL: Enforce model provenance — only load models from internal, access-controlled registries. Prohibit direct loading from public URLs or unverified external sources.
4. SIGNING: Implement model artifact signing (e.g., using Sigstore/cosign or internal PKI) to verify model integrity before loading.
5. SCANNING: Inspect .keras archives (they are ZIP files) — examine config.json for calls to keras.config.enable_unsafe_deserialization() as an IOC.
6. SANDBOXING: Where feasible, load untrusted models in isolated containers or VMs with no network access and minimal filesystem permissions.
7. DETECT: Alert on unexpected Python process spawns originating from model-loading services.
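The SCANNING step above can be sketched as a small stdlib-only check, since a .keras file is an ordinary ZIP archive. The helper name scan_keras_archive and the exact IOC substrings are illustrative assumptions based on this advisory; this is a heuristic triage aid, and an empty result is not proof that an archive is safe.

```python
import zipfile

# Indicator strings from this advisory: the safe-mode kill switch, plus
# Lambda layers, which can carry pickled Python code. Substrings are
# assumptions for illustration -- tune them to your environment.
IOCS = ("enable_unsafe_deserialization", '"class_name": "Lambda"')

def scan_keras_archive(path):
    """Scan a .keras archive (a plain ZIP) for known IOCs in config.json."""
    findings = []
    with zipfile.ZipFile(path) as zf:
        for name in zf.namelist():
            if not name.endswith("config.json"):
                continue
            text = zf.read(name).decode("utf-8", errors="replace")
            for ioc in IOCS:
                if ioc in text:
                    findings.append(f"{name}: contains {ioc!r}")
    return findings
```

A non-empty result should quarantine the artifact for manual review rather than block the pipeline silently, so analysts can inspect the flagged config.json.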
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-9906?
CVE-2025-9906 is a safe_mode bypass in the Keras Model.load_model method that enables arbitrary code execution when an untrusted .keras file is loaded, even with safe_mode=True. Any environment loading Keras models — including MLOps pipelines, Jupyter notebooks, and model-serving infrastructure — is exposed. Upgrade to Keras 3.11.0 immediately, enforce a policy of never loading models from unverified sources, and until patched, treat every .keras archive as an untrusted executable.
Is CVE-2025-9906 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2025-9906, increasing the risk of exploitation.
How to fix CVE-2025-9906?
1. PATCH: Upgrade Keras to >= 3.11.0 immediately. Verify with: pip show keras | grep Version.
2. INVENTORY: Identify all code paths calling Model.load_model() across your ML infrastructure, notebooks, and serving code.
3. SOURCE CONTROL: Enforce model provenance — only load models from internal, access-controlled registries. Prohibit direct loading from public URLs or unverified external sources.
4. SIGNING: Implement model artifact signing (e.g., using Sigstore/cosign or internal PKI) to verify model integrity before loading.
5. SCANNING: Inspect .keras archives (they are ZIP files) — examine config.json for calls to keras.config.enable_unsafe_deserialization() as an IOC.
6. SANDBOXING: Where feasible, load untrusted models in isolated containers or VMs with no network access and minimal filesystem permissions.
7. DETECT: Alert on unexpected Python process spawns originating from model-loading services.
What systems are affected by CVE-2025-9906?
This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps / CI-CD pipelines, research and notebook environments, model registries and artifact stores.
What is the CVSS score for CVE-2025-9906?
CVE-2025-9906 has a CVSS v3.1 base score of 7.3 (HIGH). The EPSS exploitation probability is 0.06%.
Technical Details
NVD Description
The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True. An attacker can create a specially crafted .keras model archive that, when loaded via Model.load_model, triggers arbitrary code execution. This is achieved by crafting a special config.json (a file within the .keras archive) that invokes keras.config.enable_unsafe_deserialization() to disable safe mode. Once safe mode is disabled, the attacker can use the Lambda layer feature of Keras, which allows arbitrary Python code in the form of pickled code. Both can appear in the same archive: the keras.config.enable_unsafe_deserialization() call simply needs to appear first in the archive, and the Lambda layer carrying the arbitrary code second.
Exploitation Scenario
An adversary publishes a malicious Keras model to a public repository (HuggingFace, GitHub, or a shared internal model registry) disguised as a legitimate fine-tuned model (e.g., a BERT variant or image classifier). A data scientist or automated pipeline downloads and loads the model with Model.load_model('model.keras') believing safe_mode=True provides protection. The crafted config.json in the archive is processed first, invoking keras.config.enable_unsafe_deserialization() to silently disable safe mode. The subsequent Lambda layer containing pickled malicious Python code then executes with full privileges of the loading process — enabling credential theft, data exfiltration, reverse shell establishment, or lateral movement into the ML platform. In CI/CD contexts, this can compromise build environments and inject backdoors into subsequently trained models.
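As a first isolation layer against this scenario, model loading can be pushed into a throwaway subprocess with a scratch working directory and a stripped environment, so a malicious archive cannot read the caller's credentials or working tree. This is only a sketch of the sandboxing guidance, not real containment (it does not block network access; containers or VMs remain the recommendation), and run_isolated is a hypothetical helper:

```python
import os
import subprocess
import sys
import tempfile

# Minimal environment variables most interpreters need to start.
_KEEP_ENV = ("PATH", "SYSTEMROOT", "LANG", "LC_ALL")

def run_isolated(argv, timeout=120):
    """Run a command in a scratch directory with a stripped environment.
    Limits credential exposure and filesystem reach, but does NOT block
    network access -- use containers/VMs for genuinely untrusted models."""
    with tempfile.TemporaryDirectory() as scratch:
        env = {k: os.environ[k] for k in _KEEP_ENV if k in os.environ}
        env["HOME"] = scratch  # keep dotfiles and tokens out of reach
        return subprocess.run(
            argv,
            cwd=scratch,        # no view of the caller's working tree
            env=env,            # drop secrets held in environment variables
            capture_output=True,
            text=True,
            timeout=timeout,
        )

# Hypothetical usage: probe an untrusted model in the throwaway process.
# run_isolated([sys.executable, "-c",
#               "import keras; keras.models.load_model('model.keras')"])
```

If the probe process crashes, times out, or spawns unexpected children, the artifact should be quarantined rather than promoted to serving.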
Weaknesses (CWE)
CVSS Vector
CVSS:3.1/AV:L/AC:L/PR:L/UI:R/S:U/C:H/I:H/A:H (local attack vector, low attack complexity, low privileges required, user interaction required, unchanged scope, high confidentiality/integrity/availability impact)

References
- github.com/keras-team/keras/pull/21429 (Issue)
- github.com/advisories/GHSA-36fq-jgmw-4r9c
- github.com/keras-team/keras/commit/713172ab56b864e59e2aa79b1a51b0e728bba858
- github.com/keras-team/keras/releases/tag/v3.11.0
- nvd.nist.gov/vuln/detail/CVE-2025-9906
- osv.dev/vulnerability/CVE-2025-9906
- github.com/ARPSyndicate/cve-scores (Exploit)
Related Vulnerabilities
- CVE-2025-49655 9.8 keras: Deserialization enables RCE (same package: keras)
- CVE-2025-1550 9.8 Keras: safe_mode bypass enables RCE via model loading (same package: keras)
- CVE-2024-3660 9.8 Keras: RCE via malicious model deserialization (same package: keras)
- CVE-2024-49326 9.8 Affiliator WP Plugin: Unauthenticated Web Shell Upload
- CVE-2025-12060 9.8 keras: Path Traversal enables file access (same package: keras)