CVE-2025-1550: Keras: safe_mode bypass enables RCE via model loading

GHSA-48g7-3x6r-xfhp · CRITICAL · PoC available · CISA SSVC: Attend
Published March 11, 2025
CISO Take

Any organization running Keras 3.x that loads .keras model files must patch to 3.9.0 immediately — the safe_mode=True flag, often cited as a security control, is completely bypassed. This is particularly dangerous in MLOps pipelines and model registries that ingest externally sourced models. Treat any .keras file loaded from outside your trust boundary as a potential code execution vector until patched.

Risk Assessment

Extremely high. CVSS 9.8 with a network attack vector, no authentication required, and no user interaction beyond the normal act of loading a model — which is standard, trusted behavior in ML workflows. The safe_mode bypass is the critical aggravating factor: security-conscious teams may have relied on this flag as a compensating control, creating a false sense of security. Exploit complexity is low, and a proof-of-concept writeup is already public. An EPSS score of roughly 8% places this CVE above 92% of all CVEs for expected exploitation activity. AI/ML systems are disproportionately exposed because model loading is a core, frequent operation that is trusted implicitly.

Affected Systems

Package   Ecosystem   Vulnerable Range      Patched
keras     pip         >= 3.0.0, < 3.9.0     3.9.0

Severity & Risk

CVSS 3.1: 9.8 / 10
EPSS: 8.0% chance of exploitation in 30 days (higher than 92% of all CVEs)
Exploitation Status: Exploit Available; exploitation signal MEDIUM
Sophistication: Trivial
Exploitation Confidence: Medium
CISA SSVC: Public PoC (indexed in trickest/cve)

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

Attack Vector (AV): Network
Attack Complexity (AC): Low
Privileges Required (PR): None
User Interaction (UI): None
Scope (S): Unchanged
Confidentiality (C): High
Integrity (I): High
Availability (A): High

Recommended Action

7 steps
  1. Patch immediately

    Upgrade Keras to 3.9.0 or later. Run pip install "keras>=3.9.0" (quoted so the shell does not interpret >=) across all environments (dev, staging, prod).

  2. Audit model sources

    Inventory all locations where .keras files are loaded from. Block loading from untrusted sources at the pipeline level.

  3. Remove safe_mode reliance

    Do not treat safe_mode=True as a security boundary — it is not. Remove any security documentation or runbooks that cite it as a control.

  4. Implement model signing

    Enforce cryptographic signing and verification of model artifacts before loading. Consider tools like Sigstore or internal PKI for model provenance; a minimal verification sketch follows this step list.

  5. Sandboxed model loading

    Run model loading in isolated containers/VMs with minimal filesystem and network access. Use seccomp profiles to restrict syscalls.

  6. Detection

    Alert on unexpected child process creation, outbound network connections, or file writes during model loading operations. Monitor for subprocess, os.system, eval, exec calls in Python processes handling model files.

  7. Model registry controls

    Enforce that only models loaded from internal, verified registries (e.g., MLflow with integrity checks) are used in production.
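
For step 4, a minimal verification gate can sit in front of every load_model call. The sketch below assumes an internal manifest (approved_models.json) that maps model filenames to SHA-256 digests signed off by your review process; the manifest path, helper names, and the keras.saving.load_model call are illustrative assumptions, and in practice the manifest itself should be signed (e.g., via Sigstore or internal PKI) rather than trusted as a plain file.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical manifest produced by your model-review process:
# {"sentiment-v3.keras": "<sha256 hex digest>", ...}
MANIFEST_PATH = Path("approved_models.json")


def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so multi-GB model archives are handled safely."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def load_verified_model(path: str):
    """Hand a .keras file to Keras only if its digest matches the approved manifest."""
    model_path = Path(path)
    approved = json.loads(MANIFEST_PATH.read_text())
    expected = approved.get(model_path.name)
    if expected is None or sha256_of(model_path) != expected:
        raise PermissionError(f"{model_path} is not in the approved-model manifest")
    import keras  # imported late so the gate runs even on environments pending the upgrade
    return keras.saving.load_model(str(model_path))
```

Digest pinning only proves the artifact is the one that was reviewed; it does not make an unreviewed or malicious model safe, so it complements, rather than replaces, the upgrade to 3.9.0.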

CISA SSVC Assessment

Decision: Attend
Exploitation: PoC
Automatable: No
Technical Impact: Total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
  Article 15 - Accuracy, robustness and cybersecurity
  Article 9 - Risk management system
ISO 42001
  A.6.1.6 - AI system supply chain management
  A.8.5 - AI system security
NIST AI RMF
  GOVERN 1.7 - Processes for AI risk identification and management
  MANAGE 2.2 - Risk treatment and mitigation
OWASP LLM Top 10
  LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2025-1550?

CVE-2025-1550 is a critical vulnerability in Keras 3.x (versions prior to 3.9.0) in which Model.load_model permits arbitrary code execution, even with safe_mode=True, via a manually constructed, malicious .keras archive. By altering the config.json inside the archive, an attacker can specify arbitrary Python modules and functions, along with their arguments, to be loaded and executed during model loading. Any .keras file loaded from outside your trust boundary should be treated as a potential code execution vector until you upgrade to 3.9.0.

Is CVE-2025-1550 actively exploited?

CVE-2025-1550 is not currently flagged as exploited in the wild (CISA SSVC exploitation status: PoC), but proof-of-concept exploit code is publicly available, which increases the risk of exploitation.

How to fix CVE-2025-1550?

1. **Patch immediately**: Upgrade Keras to 3.9.0 or later. Run `pip install "keras>=3.9.0"` across all environments (dev, staging, prod).
2. **Audit model sources**: Inventory all locations where .keras files are loaded from. Block loading from untrusted sources at the pipeline level.
3. **Remove safe_mode reliance**: Do not treat safe_mode=True as a security boundary — it is not. Remove any security documentation or runbooks that cite it as a control.
4. **Implement model signing**: Enforce cryptographic signing and verification of model artifacts before loading. Consider tools like Sigstore or internal PKI for model provenance.
5. **Sandboxed model loading**: Run model loading in isolated containers/VMs with minimal filesystem and network access. Use seccomp profiles to restrict syscalls.
6. **Detection**: Alert on unexpected child process creation, outbound network connections, or file writes during model loading operations. Monitor for subprocess, os.system, eval, exec calls in Python processes handling model files.
7. **Model registry controls**: Enforce that only models loaded from internal, verified registries (e.g., MLflow with integrity checks) are used in production.

What systems are affected by CVE-2025-1550?

CVE-2025-1550 affects Keras versions >= 3.0.0 and < 3.9.0 (patched in 3.9.0). In AI/ML architectures, exposure is concentrated in training pipelines, model serving, MLOps platforms, model registries, and CI/CD pipelines.

What is the CVSS score for CVE-2025-1550?

CVE-2025-1550 has a CVSS v3.1 base score of 9.8 (CRITICAL). The EPSS exploitation probability is 7.97%.

Technical Details

NVD Description

The Keras Model.load_model function permits arbitrary code execution, even with safe_mode=True, through a manually constructed, malicious .keras archive. By altering the config.json file within the archive, an attacker can specify arbitrary Python modules and functions, along with their arguments, to be loaded and executed during model loading.
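
For teams that cannot upgrade immediately, a pre-load screen can flag archives whose configuration references unexpected Python modules. The sketch below rests on two assumptions worth stating: that a .keras file is a zip archive containing config.json (the Keras v3 format), and that legitimate serialized objects reference modules under keras or builtins. Adjust the allowlist to whatever custom modules your models genuinely use, and treat this as triage, not a security boundary.

```python
import json
import zipfile

# Assumed allowlist: modules that benign Keras configs are expected to reference.
ALLOWED_MODULE_PREFIXES = ("keras", "builtins")


def suspicious_module_refs(path):
    """Return (module, class_name) pairs in config.json that fall outside the allowlist."""
    findings = []

    def walk(node):
        if isinstance(node, dict):
            module = node.get("module")
            if isinstance(module, str) and not module.startswith(ALLOWED_MODULE_PREFIXES):
                findings.append((module, node.get("class_name")))
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    with zipfile.ZipFile(path) as archive:
        walk(json.loads(archive.read("config.json")))
    return findings


if __name__ == "__main__":
    hits = suspicious_module_refs("untrusted_model.keras")  # hypothetical file name
    if hits:
        raise SystemExit(f"Refusing to load: unexpected module references {hits}")
```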

Exploitation Scenario

An adversary targets an organization using Keras for LLM fine-tuning or inference. They publish a 'fine-tuned LLaMA adapter' on a public model hub, or send a model file via a phishing email to an ML engineer. The .keras archive contains a crafted config.json specifying `__class_name__: subprocess.Popen` with arguments establishing a reverse shell. The engineer loads the model — even explicitly passing `safe_mode=True` — and within seconds the attacker has an interactive shell running as the ML service account. From there they pivot to the training data S3 bucket, exfiltrate model weights (IP theft), or implant a backdoor in production inference services. The entire attack chain requires zero prior access and is triggered by a single, routine ML workflow action.
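
As an illustration of the detection guidance in step 6, the scenario above can be partially contained with a CPython audit hook: subprocess.Popen, os.system, and the os.exec*/os.spawn* families all emit documented audit events, and raising an exception from the hook aborts the operation. The guarded_load helper, the blocked-event list, and the module-level flag are illustrative assumptions; a payload that avoids process creation (for example, pure-Python exfiltration) would not be caught, so this supplements patching and sandboxing rather than replacing them.

```python
import sys

# Audit events that indicate process creation; these names are documented CPython events.
BLOCKED_EVENT_PREFIXES = ("subprocess.Popen", "os.system", "os.exec", "os.spawn")

_loading_model = False  # toggled only around model deserialization


def _audit_hook(event, args):
    # Abort process-creation attempts fired while a model file is being deserialized.
    if _loading_model and event.startswith(BLOCKED_EVENT_PREFIXES):
        raise RuntimeError(f"Blocked audit event during model load: {event}")


sys.addaudithook(_audit_hook)  # audit hooks cannot be removed once installed


def guarded_load(path):
    """Load a .keras file with process-creation events blocked (hypothetical helper)."""
    global _loading_model
    import keras
    _loading_model = True
    try:
        # safe_mode is not a boundary for this CVE; the audit hook is the control here.
        return keras.saving.load_model(path, safe_mode=True)
    finally:
        _loading_model = False
```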

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

Timeline

Published: March 11, 2025
Last Modified: July 31, 2025
First Seen: March 11, 2025
