CVE-2025-1550: Keras: safe_mode bypass enables RCE via model loading
GHSA-48g7-3x6r-xfhp · CRITICAL · PoC AVAILABLE · CISA SSVC: ATTEND

Any organization running Keras 3.x that loads .keras model files must patch to 3.9.0 immediately — the safe_mode=True flag, often cited as a security control, is completely bypassed. This is particularly dangerous in MLOps pipelines and model registries that ingest externally sourced models. Treat any .keras file loaded from outside your trust boundary as a potential code execution vector until patched.
Risk Assessment
Extremely high. CVSS 9.8 with network attack vector, zero authentication required, and zero user interaction needed beyond the normal act of loading a model — which is standard, trusted behavior in ML workflows. The safe_mode bypass is the critical aggravating factor: security-conscious teams may have relied on this flag as a compensating control, creating a false sense of security. Exploit complexity is low; a proof-of-concept writeup is already public. EPSS of 4.8% suggests active exploitation interest. AI/ML systems are disproportionately exposed because model loading is a core, frequent operation trusted implicitly.
Recommended Action
Seven steps:

1. **Patch immediately**: Upgrade Keras to 3.9.0 or later. Run `pip install "keras>=3.9.0"` (quoted so the shell does not interpret `>=` as a redirect) across all environments (dev, staging, prod).
2. **Audit model sources**: Inventory all locations where .keras files are loaded from. Block loading from untrusted sources at the pipeline level.
3. **Remove safe_mode reliance**: Do not treat safe_mode=True as a security boundary — it is not. Remove any security documentation or runbooks that cite it as a control.
4. **Implement model signing**: Enforce cryptographic signing and verification of model artifacts before loading. Consider tools like Sigstore or internal PKI for model provenance.
5. **Sandboxed model loading**: Run model loading in isolated containers/VMs with minimal filesystem and network access. Use seccomp profiles to restrict syscalls.
6. **Detection**: Alert on unexpected child-process creation, outbound network connections, or file writes during model loading operations. Monitor for subprocess, os.system, eval, and exec calls in Python processes handling model files.
7. **Model registry controls**: Enforce that only models loaded from internal, verified registries (e.g., MLflow with integrity checks) are used in production.
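Step 4 can be approximated with a digest allowlist even before full signing infrastructure is in place. A minimal sketch, assuming a hypothetical `APPROVED_DIGESTS` mapping published by an internal registry (real deployments should use Sigstore or PKI signatures rather than bare hashes):

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist published by an internal model registry.
# (The digest below is the SHA-256 of an empty file, for illustration only.)
APPROVED_DIGESTS = {
    "model-v1.keras": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_model(path: str) -> bool:
    """Return True only if the artifact's SHA-256 matches the registry entry."""
    p = Path(path)
    digest = hashlib.sha256(p.read_bytes()).hexdigest()
    return APPROVED_DIGESTS.get(p.name) == digest
```

A loader would call `verify_model()` first and refuse to hand unverified files to `keras.saving.load_model()`; the check gates on content, so a tampered archive with a familiar filename still fails.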
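For step 6, CPython's audit hooks expose exactly the signals listed above (subprocess spawns, shell commands, eval/exec). This is only an in-process sketch of the detection signal; a sandbox plus a host-level EDR agent is the real control:

```python
import sys

# Audit events that should never fire while deserializing a model.
SUSPICIOUS_EVENTS = {"subprocess.Popen", "os.system", "exec"}  # "exec" covers eval()/exec()

alerts = []

def model_load_hook(event, args):
    if event in SUSPICIOUS_EVENTS:
        alerts.append(event)  # in production: emit a log/alert instead

sys.addaudithook(model_load_hook)  # note: audit hooks cannot be removed once added

# Simulate the kind of call a malicious config.json triggers during load:
eval("1 + 1")
print(alerts)  # prints: ['exec']
```

Because hooks are global and permanent for the process, this pattern fits a dedicated, short-lived model-loading worker rather than a long-running service.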
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-1550?
CVE-2025-1550 is a critical (CVSS 9.8) arbitrary code execution vulnerability in the Keras Model.load_model function: a manually crafted .keras archive can execute attacker-specified Python code during loading, even with safe_mode=True. Any organization running Keras 3.x that loads .keras model files must patch to 3.9.0 immediately. This is particularly dangerous in MLOps pipelines and model registries that ingest externally sourced models. Treat any .keras file loaded from outside your trust boundary as a potential code execution vector until patched.
Is CVE-2025-1550 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2025-1550, increasing the risk of exploitation.
How to fix CVE-2025-1550?
1. **Patch immediately**: Upgrade Keras to 3.9.0 or later. Run `pip install "keras>=3.9.0"` (quoted so the shell does not interpret `>=` as a redirect) across all environments (dev, staging, prod).
2. **Audit model sources**: Inventory all locations where .keras files are loaded from. Block loading from untrusted sources at the pipeline level.
3. **Remove safe_mode reliance**: Do not treat safe_mode=True as a security boundary — it is not. Remove any security documentation or runbooks that cite it as a control.
4. **Implement model signing**: Enforce cryptographic signing and verification of model artifacts before loading. Consider tools like Sigstore or internal PKI for model provenance.
5. **Sandboxed model loading**: Run model loading in isolated containers/VMs with minimal filesystem and network access. Use seccomp profiles to restrict syscalls.
6. **Detection**: Alert on unexpected child-process creation, outbound network connections, or file writes during model loading operations. Monitor for subprocess, os.system, eval, and exec calls in Python processes handling model files.
7. **Model registry controls**: Enforce that only models loaded from internal, verified registries (e.g., MLflow with integrity checks) are used in production.
What systems are affected by CVE-2025-1550?
This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps platforms, model registries, CI/CD pipelines.
What is the CVSS score for CVE-2025-1550?
CVE-2025-1550 has a CVSS v3.1 base score of 9.8 (CRITICAL). The EPSS exploitation probability is 7.97%.
Technical Details
NVD Description
The Keras Model.load_model function permits arbitrary code execution, even with safe_mode=True, through a manually constructed, malicious .keras archive. By altering the config.json file within the archive, an attacker can specify arbitrary Python modules and functions, along with their arguments, to be loaded and executed during model loading.
Exploitation Scenario
An adversary targets an organization using Keras for LLM fine-tuning or inference. They publish a 'fine-tuned LLaMA adapter' on a public model hub, or send a model file via a phishing email to an ML engineer. The .keras archive contains a crafted config.json specifying module `subprocess` and class_name `Popen`, with arguments establishing a reverse shell. The engineer loads the model — even explicitly passing `safe_mode=True` — and within seconds the attacker has an interactive shell running as the ML service account. From there they pivot to the training data S3 bucket, exfiltrate model weights (IP theft), or implant a backdoor in production inference services. The entire attack chain requires zero prior access and is triggered by a single, routine ML workflow action.
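A defensive pre-load triage step follows directly from this scenario: a .keras file is a zip archive whose config.json names the modules to instantiate, so suspicious references can be flagged before load_model is ever called. A sketch only; the `DENYLIST` below is an assumption rather than an official Keras list, and static checks can be evaded, so treat this as triage, not a security boundary:

```python
import json
import zipfile

# Illustrative denylist: modules a legitimate layer config should not reference.
DENYLIST = ("subprocess", "os", "builtins", "sys", "shutil", "socket")

def scan_keras_archive(path: str) -> list:
    """Return findings for suspicious 'module' entries in a .keras config.json."""
    with zipfile.ZipFile(path) as zf:
        config = json.loads(zf.read("config.json"))

    findings = []

    def walk(node):
        if isinstance(node, dict):
            module = node.get("module") or ""
            if any(module == m or module.startswith(m + ".") for m in DENYLIST):
                findings.append("suspicious module reference: " + module)
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(config)
    return findings
```

A pipeline gate would quarantine any archive with non-empty findings; upgrading to 3.9.0 remains the actual fix.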
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

References
- github.com/keras-team/keras/pull/20751 (Issue, Patch)
- towerofhanoi.it/writeups/cve-2025-1550/ (Exploit, 3rd Party)
- github.com/advisories/GHSA-48g7-3x6r-xfhp
- github.com/keras-team/keras/commit/e67ac8ffd0c883bec68eb65bb52340c7f9d3a903
- github.com/keras-team/keras/releases/tag/v3.9.0
- github.com/keras-team/keras/security/advisories/GHSA-48g7-3x6r-xfhp
- nvd.nist.gov/vuln/detail/CVE-2025-1550
- github.com/fardeen-ahmed/Bug-bounty-Writeups (Exploit)
- github.com/fkie-cad/nvd-json-data-feeds (Exploit)
- github.com/gpxlnx/medium-writeup (Exploit)
- github.com/hsamnguyen/lastest-update-bounty (Exploit)
- github.com/insecrez/Bug-bounty-Writeups (Exploit)
- github.com/necst/security-model-sharing (Exploit)
- github.com/pwnfuzz/commithunter (Exploit)
- github.com/rix4uni/medium-writeups (Exploit)
Related Vulnerabilities
- CVE-2025-49655 (9.8) keras: Deserialization enables RCE (same package: keras)
- CVE-2025-12060 (9.8) keras: Path Traversal enables file access (same package: keras)
- CVE-2024-3660 (9.8) Keras: RCE via malicious model deserialization (same package: keras)
- CVE-2024-49326 (9.8) Affiliator WP Plugin: Unauthenticated Web Shell Upload (same package: keras)
- CVE-2026-1462 (8.8) Keras: safe_mode bypass allows RCE via model deserialization (same package: keras)