CVE-2025-8747: Keras: safe mode bypass enables RCE via model load

GHSA-c9rc-mg46-23w3 · HIGH · PoC available · CISA SSVC: Attend
Published August 11, 2025
CISO Take

Any system loading untrusted .keras model files is exposed to full code execution — this includes MLOps pipelines, model serving infrastructure, and data science workstations. Upgrade Keras to 3.11.0 immediately and audit every location where Model.load_model() is called against externally sourced files. Treat .keras model archives as executable code, not passive data.
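The upgrade check above can be automated. Below is a minimal sketch (my own illustration, not part of the advisory) that gates model loading on the installed Keras version, assuming a pip-managed environment where `importlib.metadata` can see the `keras` distribution:

```python
from importlib import metadata

PATCHED = (3, 11, 0)  # first Keras release with the safe-mode bypass fixed

def parse_version(version: str) -> tuple:
    """Parse a dotted version string into a comparable tuple of ints.

    Non-numeric segments (e.g. '0rc1') are dropped, which conservatively
    treats pre-releases of 3.11.0 as unpatched.
    """
    return tuple(int(part) for part in version.split(".") if part.isdigit())

def keras_is_patched() -> bool:
    """True if the installed keras distribution is at or above 3.11.0."""
    try:
        return parse_version(metadata.version("keras")) >= PATCHED
    except metadata.PackageNotFoundError:
        return False  # Keras not installed in this environment
```

A pipeline can call `keras_is_patched()` before any `load_model()` invocation and fail closed on vulnerable versions.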

Risk Assessment

High risk for organizations with ML pipelines that load Keras models from external sources: public repositories, shared registries, or unverified storage. The local attack vector limits opportunistic exploitation, but the supply chain vector (a poisoned model in a public registry or code repository) is highly realistic for enterprise ML teams. The EPSS score is extremely low (0.00009, roughly 0.01%) and no active exploitation has been observed, but the impact when triggered is complete compromise of the loading host. Keras's wide adoption across the industry significantly amplifies aggregate exposure.

Affected Systems

Package  Ecosystem  Vulnerable Range    Patched
keras    pip        >= 3.0.0, < 3.11.0  3.11.0

Severity & Risk

CVSS 3.1
7.8 / 10
EPSS
0.01%
chance of exploitation in 30 days
Higher than 1% of all CVEs
Exploitation Status: Exploit available
Exploitation signal: Medium
Sophistication: Moderate
Exploitation confidence: Medium
CISA SSVC: Public PoC indexed (trickest/cve)
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

Attack Vector (AV): Local
Attack Complexity (AC): Low
Privileges Required (PR): None
User Interaction (UI): Required
Scope (S): Unchanged
Confidentiality (C): High
Integrity (I): High
Availability (A): High

Recommended Action

7 steps
  1. Upgrade Keras to 3.11.0 immediately; the patch is available via pip.
  2. Until patched, audit all code invoking Model.load_model() and restrict it to models from verified, internal sources only.
  3. Implement model provenance controls: sign and verify model artifacts before loading; reject unsigned models in automated pipelines.
  4. Apply least privilege and network segmentation to ML training and serving infrastructure to contain the blast radius of any code execution.
  5. Scan .keras files with EDR/antivirus before loading them in pipeline environments.
  6. For detection, monitor for unexpected process spawns from Python/ML worker processes, unusual network connections from model-loading jobs, and anomalous file writes in ML pipeline contexts.
  7. Enforce allowlisting of model sources in MLOps workflows.
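The provenance and allowlisting steps above can be sketched in code. The following is a minimal illustration, not a hardened implementation: the allowlist contents and the `load_model_guarded` name are hypothetical, and `keras.saving.load_model` is assumed as the Keras 3 loading entry point.

```python
import hashlib

# Hypothetical allowlist of approved model digests; in practice this would be
# populated from an artifact registry or model-signing infrastructure.
APPROVED_SHA256 = {
    # "d2b2...": "sentiment-classifier v4, approved 2025-08-01",
}

def sha256_of(path):
    """Stream a file through SHA-256 so large model archives are not read at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_model_guarded(path):
    """Refuse to load a .keras archive whose digest is not on the allowlist."""
    digest = sha256_of(path)
    if digest not in APPROVED_SHA256:
        raise PermissionError(f"unapproved model artifact {path} (sha256={digest})")
    import keras  # deferred import so the guard itself has no Keras dependency
    return keras.saving.load_model(path)
```

Failing closed here means an unapproved model never reaches the deserializer at all, which also protects unpatched hosts during the upgrade window.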

CISA SSVC Assessment

Decision Attend
Exploitation poc
Automatable No
Technical Impact total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 15 - Accuracy, robustness and cybersecurity
Art. 17 - Quality management system
ISO 42001
A.10 - Third-party and supplier relationships
A.8 - AI risk management
NIST AI RMF
GOVERN 6.1 - Policies and procedures for AI supply chain risk
MANAGE 2.4 - Residual risk treatment and response
OWASP LLM Top 10
LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2025-8747?

CVE-2025-8747 is a safe mode bypass in the Model.load_model method of Keras versions 3.0.0 through 3.10.0 that allows arbitrary code execution when a specially crafted .keras model archive is loaded. Any system that loads untrusted .keras files is exposed, including MLOps pipelines, model serving infrastructure, and data science workstations. The fix is to upgrade to Keras 3.11.0 and treat .keras model archives as executable code, not passive data.

Is CVE-2025-8747 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2025-8747, increasing the risk of exploitation.

How to fix CVE-2025-8747?

1. Upgrade Keras to 3.11.0 immediately; the patch is available via pip.
2. Until patched, audit all code invoking Model.load_model() and restrict it to models from verified, internal sources only.
3. Implement model provenance controls: sign and verify model artifacts before loading; reject unsigned models in automated pipelines.
4. Apply least privilege and network segmentation to ML training and serving infrastructure to contain the blast radius of any code execution.
5. Scan .keras files with EDR/antivirus before loading them in pipeline environments.
6. For detection, monitor for unexpected process spawns from Python/ML worker processes, unusual network connections from model-loading jobs, and anomalous file writes in ML pipeline contexts.
7. Enforce allowlisting of model sources in MLOps workflows.

What systems are affected by CVE-2025-8747?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps pipelines, model registries, data science workstations.

What is the CVSS score for CVE-2025-8747?

CVE-2025-8747 has a CVSS v3.1 base score of 7.8 (HIGH). The EPSS exploitation probability is 0.01%.

Technical Details

NVD Description

A safe mode bypass vulnerability in the `Model.load_model` method in Keras versions 3.0.0 through 3.10.0 allows an attacker to achieve arbitrary code execution by convincing a user to load a specially crafted `.keras` model archive.

Exploitation Scenario

An attacker crafts a malicious .keras model archive containing serialized Python objects that execute arbitrary code upon deserialization, bypassing the safe_mode protection in Model.load_model(). The attacker publishes this as a purportedly fine-tuned or benchmark-optimized model variant on a public platform such as Hugging Face, Kaggle, or GitHub. An ML engineer evaluating community models, or an automated MLOps pipeline configured to pull updated models from a registry, calls Model.load_model() on the file. Code execution fires on the engineer's workstation or the pipeline server, granting the attacker a foothold in ML infrastructure. From there, they can exfiltrate API keys and cloud credentials, steal proprietary model weights or training data, poison downstream model artifacts, or pivot deeper into the corporate network.
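Because a Keras v3 `.keras` file is a zip archive, a first-pass triage step is to inspect its members before it ever reaches the deserializer. The sketch below assumes the common v3 layout (`metadata.json`, `config.json`, `model.weights.h5`); models with custom assets may legitimately contain more, and crucially, a crafted archive can hide code-executing configuration inside `config.json` itself, so a clean listing is triage, not clearance:

```python
import zipfile

# Members normally present in a Keras v3 .keras archive (assumed layout);
# anything else deserves a closer look before load_model() is ever called.
EXPECTED = {"metadata.json", "config.json", "model.weights.h5"}

def unexpected_members(path):
    """Return archive members outside the expected Keras v3 layout, sorted.

    An empty result does NOT prove the model is safe: the safe-mode bypass
    lives in the deserialization of config.json, not in extra files.
    """
    with zipfile.ZipFile(path) as zf:
        return sorted(set(zf.namelist()) - EXPECTED)
```

This kind of check fits naturally in a pipeline gate alongside version checks and digest allowlisting.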

CVSS Vector

CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

Timeline

Published
August 11, 2025
Last Modified
August 14, 2025
First Seen
August 11, 2025

Related Vulnerabilities