CVE-2025-8747: Keras: safe mode bypass enables RCE via model load
GHSA-c9rc-mg46-23w3 | HIGH | PoC available | CISA SSVC decision: Attend

Any system that loads untrusted .keras model files is exposed to full code execution, including MLOps pipelines, model serving infrastructure, and data science workstations. Upgrade Keras to 3.11.0 immediately and audit every location where Model.load_model() is called on externally sourced files. Treat .keras model archives as executable code, not passive data.
Risk Assessment
High risk for organizations with ML pipelines that load Keras models from external sources, public repositories, shared registries, or unverified storage. The local attack vector reduces opportunistic exploitation but the supply chain vector — poisoned model in a public registry or code repo — is highly realistic for enterprise ML teams. EPSS is extremely low (0.00009) with no active exploitation observed, but the impact when triggered is complete compromise of the loading host. Wide Keras adoption across the industry amplifies aggregate exposure significantly.
Recommended Action
1) Upgrade Keras to 3.11.0 immediately; the patch is available via pip.
2) Until patched, audit all code invoking Model.load_model() and restrict it to models from verified, internal sources only.
3) Implement model provenance controls: sign and verify model artifacts before loading; reject unsigned models in automated pipelines.
4) Apply least-privilege and network segmentation to ML training and serving infrastructure to contain the blast radius of any code execution.
5) Scan .keras files with EDR/antivirus before loading them in pipeline environments.
6) For detection: monitor for unexpected process spawns from Python/ML worker processes, unusual network connections from model loading jobs, and anomalous file writes in ML pipeline contexts.
7) Enforce allowlisting of model sources in MLOps workflows.
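The provenance control in step 3 can be sketched as a digest allowlist checked before any load. This is a minimal illustration, not a full signing scheme: `TRUSTED_SHA256` and `verify_model_artifact` are hypothetical names, the sample digest is a placeholder, and the actual `keras.models.load_model` call is only indicated in comments.

```python
import hashlib

# Hypothetical allowlist of SHA-256 digests for vetted model artifacts.
# In production this would come from a signed manifest, not be hard-coded.
# The placeholder below is the SHA-256 of the bytes b"test".
TRUSTED_SHA256 = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large model archives need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model_artifact(path: str) -> bool:
    """Return True only if the artifact's digest is on the allowlist."""
    return sha256_of(path) in TRUSTED_SHA256

# Usage sketch: refuse to load anything unverified.
# if verify_model_artifact("model.keras"):
#     model = keras.models.load_model("model.keras")  # assumes Keras >= 3.11.0
# else:
#     raise RuntimeError("untrusted model artifact; refusing to load")
```

Automated pipelines can run the same check as a gate before the model ever reaches a worker that has Keras installed.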
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-8747?
CVE-2025-8747 is a safe mode bypass in the `Model.load_model` method of Keras versions 3.0.0 through 3.10.0 that allows arbitrary code execution when a specially crafted .keras model archive is loaded. Any system that loads untrusted .keras files is exposed, including MLOps pipelines, model serving infrastructure, and data science workstations; treat .keras archives as executable code, not passive data, and upgrade to Keras 3.11.0.
Is CVE-2025-8747 actively exploited?
No active exploitation of CVE-2025-8747 has been observed and its EPSS score is very low, but proof-of-concept exploit code is publicly available, which increases the risk of future exploitation.
How to fix CVE-2025-8747?
1) Upgrade Keras to 3.11.0 immediately; the patch is available via pip.
2) Until patched, audit all code invoking Model.load_model() and restrict it to models from verified, internal sources only.
3) Implement model provenance controls: sign and verify model artifacts before loading; reject unsigned models in automated pipelines.
4) Apply least-privilege and network segmentation to ML training and serving infrastructure to contain the blast radius of any code execution.
5) Scan .keras files with EDR/antivirus before loading them in pipeline environments.
6) For detection: monitor for unexpected process spawns from Python/ML worker processes, unusual network connections from model loading jobs, and anomalous file writes in ML pipeline contexts.
7) Enforce allowlisting of model sources in MLOps workflows.
What systems are affected by CVE-2025-8747?
This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps pipelines, model registries, data science workstations.
What is the CVSS score for CVE-2025-8747?
CVE-2025-8747 has a CVSS v3.1 base score of 7.8 (HIGH). The EPSS exploitation probability is 0.01%.
Technical Details
NVD Description
A safe mode bypass vulnerability in the `Model.load_model` method in Keras versions 3.0.0 through 3.10.0 allows an attacker to achieve arbitrary code execution by convincing a user to load a specially crafted `.keras` model archive.
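Since the affected range is 3.0.0 through 3.10.0 with the fix landing in 3.11.0, pipeline code can refuse to proceed on vulnerable installs. The sketch below assumes plain X.Y.Z version strings; `parse_version` is a minimal hypothetical helper, not a substitute for a full version-parsing library such as `packaging`.

```python
from importlib import metadata

FIXED = (3, 11, 0)  # first Keras release containing the safe mode fix

def parse_version(version: str) -> tuple:
    """Minimal parser for plain X.Y.Z strings (no pre-release handling)."""
    return tuple(int(part) for part in version.split(".")[:3])

def keras_is_patched(installed: str) -> bool:
    """True if the given Keras version is at or beyond the fixed release."""
    return parse_version(installed) >= FIXED

def check_environment() -> bool:
    """Look up the installed Keras, if any, and report patch status."""
    try:
        return keras_is_patched(metadata.version("keras"))
    except metadata.PackageNotFoundError:
        return True  # Keras is absent, so there is nothing to patch
```

A CI job could call `check_environment()` and fail the build when it returns False, turning the upgrade requirement into an enforced gate.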
Exploitation Scenario
An attacker crafts a malicious .keras model archive containing serialized Python objects that execute arbitrary code upon deserialization, bypassing the safe_mode protection in Model.load_model(). The attacker publishes this as a purportedly fine-tuned or benchmark-optimized model variant on a public platform such as Hugging Face, Kaggle, or GitHub. An ML engineer evaluating community models, or an automated MLOps pipeline configured to pull updated models from a registry, calls Model.load_model() on the file. Code execution fires on the engineer's workstation or the pipeline server, granting the attacker a foothold in ML infrastructure. From there, they can exfiltrate API keys and cloud credentials, steal proprietary model weights or training data, poison downstream model artifacts, or pivot deeper into the corporate network.
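One pragmatic triage step before loading a community model is to inspect the archive's `config.json` (a .keras file is a zip archive) for references to Python modules outside Keras. The allowlist and the `triage_keras_archive` helper below are illustrative assumptions for a pre-load scanner; flagging suspicious entries is a defense-in-depth aid, not a substitute for upgrading to 3.11.0.

```python
import json
import zipfile

# Modules a benign Keras config is expected to reference; anything else is
# flagged for human review. This allowlist is illustrative, not exhaustive.
ALLOWED_MODULE_PREFIXES = ("keras",)

def find_suspicious_modules(config, found=None):
    """Recursively collect 'module' values that fall outside the allowlist."""
    if found is None:
        found = []
    if isinstance(config, dict):
        mod = config.get("module")
        if isinstance(mod, str) and not mod.startswith(ALLOWED_MODULE_PREFIXES):
            found.append(mod)
        for value in config.values():
            find_suspicious_modules(value, found)
    elif isinstance(config, list):
        for item in config:
            find_suspicious_modules(item, found)
    return found

def triage_keras_archive(path: str) -> list:
    """Read config.json from a .keras zip and return flagged module names."""
    with zipfile.ZipFile(path) as archive:
        config = json.loads(archive.read("config.json"))
    return find_suspicious_modules(config)
```

An empty result does not prove the archive is safe, but any flagged module (for example `subprocess` or `os`) is a strong signal the model should never be passed to Model.load_model().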
Weaknesses (CWE)
CVSS Vector
CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
References
- github.com/keras-team/keras/pull/21429 (Issue)
- github.com/keras-team/keras/commit/713172ab56b864e59e2aa79b1a51b0e728bba858
- github.com/keras-team/keras/security/advisories/GHSA-c9rc-mg46-23w3
- github.com/advisories/GHSA-c9rc-mg46-23w3
- jfrog.com/blog/keras-safe_mode-bypass-vulnerability/ (3rd Party)
- nvd.nist.gov/vuln/detail/CVE-2025-8747
- github.com/fkie-cad/nvd-json-data-feeds (Exploit)
Timeline
Related Vulnerabilities
- CVE-2025-49655 (9.8) keras: Deserialization enables RCE (same package: keras)
- CVE-2025-1550 (9.8) Keras: safe_mode bypass enables RCE via model loading (same package: keras)
- CVE-2024-3660 (9.8) Keras: RCE via malicious model deserialization (same package: keras)
- CVE-2024-49326 (9.8) Affiliator WP Plugin: Unauthenticated Web Shell Upload (same package: keras)
- CVE-2025-12060 (9.8) keras: Path Traversal enables file access (same package: keras)