CVE-2025-49655: keras: Deserialization enables RCE

GHSA-cvhh-q5g5-qprp · CRITICAL · CISA SSVC: Attend
Published October 17, 2025
CISO Take

A critical deserialization RCE in Keras 3.11.0–3.11.2 bypasses safe mode entirely; the protection your ML engineers may have been trusting is worthless on affected versions. Any pipeline that loads Keras model files from external or user-supplied sources is exposed to full compromise the moment a malicious file is loaded. Patch to 3.11.3 now and treat model loading from untrusted sources as an uncontrolled code-execution path until the upgrade is verified.

Risk Assessment

CVSS 9.8 with network vector, no privileges, no user interaction — maximum exploitability on paper. The EPSS of 0.00034 indicates no observed in-the-wild exploitation at time of publication, but this will not hold: the attack primitive (malicious model file → RCE) is well-understood and tooling exists. The safe mode bypass elevates severity significantly: organizations that implemented safe mode as a compensating control are fully exposed. AI/ML teams routinely load models from HuggingFace, internal registries, and third-party sources, making the attack surface broad across any organization with an ML practice.

Affected Systems

Package   Ecosystem   Vulnerable Range        Patched
keras     pip         >= 3.11.0, < 3.11.3     3.11.3

Do you use Keras 3.11.0–3.11.2? You're affected.
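A quick way to check whether an installed Keras falls in the affected range is to compare its version tuple against the bounds from the table above. This is a stdlib-only sketch; the naive version parsing does not handle pre-release suffixes:

```python
from importlib import metadata

AFFECTED_MIN = (3, 11, 0)  # first vulnerable release
PATCHED = (3, 11, 3)       # first fixed release

def keras_is_vulnerable(version: str) -> bool:
    """True if `version` is in the affected range >= 3.11.0, < 3.11.3."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    return AFFECTED_MIN <= parts < PATCHED

try:
    installed = metadata.version("keras")
    print(installed, "VULNERABLE" if keras_is_vulnerable(installed) else "ok")
except metadata.PackageNotFoundError:
    print("keras is not installed in this environment")
```

Run this on every host in your ML fleet rather than trusting a single requirements file, since pinned and actually-installed versions often diverge.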

Severity & Risk

CVSS 3.1: 9.8 / 10
EPSS: 0.03% chance of exploitation in 30 days (higher than 16% of all CVEs)
Exploitation status: exploit available (public PoC, per CISA SSVC); composite exploitation rating MEDIUM
Sophistication: moderate
Exploitation confidence: medium
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

Attack Vector (AV): Network
Attack Complexity (AC): Low
Privileges Required (PR): None
User Interaction (UI): None
Scope (S): Unchanged
Confidentiality (C): High
Integrity (I): High
Availability (A): High

Recommended Action

  1. PATCH IMMEDIATELY: upgrade all Keras installations to 3.11.3; this is the only complete fix. Run `pip show keras` across your ML infrastructure to identify affected versions.
  2. AUDIT model loading: catalog every place your code calls `keras.models.load_model()` or equivalent and identify the trust level of each source file.
  3. DO NOT rely on `safe_mode=True` as a security control on any Keras version until you've confirmed 3.11.3 is deployed.
  4. IMPLEMENT model provenance controls: cryptographically sign and hash-verify model files before loading, even from internal registries.
  5. ISOLATE model loading: run model deserialization in sandboxed environments (containers with no network, read-only filesystems, minimal privileges) as a defense-in-depth measure.
  6. DETECT: monitor for unexpected process spawning from Python/ML processes, outbound connections from training/inference nodes, and anomalous file-access patterns after model loads.
  7. CHECK shared model stores: audit any Keras model files pulled from external sources (HuggingFace, S3, third parties) since Keras 3.11.0 was released (October 2025).
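The provenance-control step above can be sketched as a hash allowlist gate placed in front of the loader. This is a minimal illustration, not a complete signing scheme; the `load_verified_model` helper and the allowlist source are assumptions, and the final `load_model` call is only safe once Keras >= 3.11.3 is confirmed:

```python
import hashlib
from pathlib import Path

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large model artifacts never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_verified_model(path: str, allowlist: set[str]):
    """Refuse to deserialize any model file whose digest is not pre-approved."""
    digest = sha256_of(path)
    if digest not in allowlist:
        raise PermissionError(f"{path}: digest {digest} not in model allowlist")
    # Only reached for approved artifacts, and only safe on Keras >= 3.11.3:
    import keras  # assumed available in the ML environment
    return keras.models.load_model(path)
```

Populate the allowlist from a trusted registry at model-publication time, not from the same channel that delivers the files, otherwise an attacker who controls the store controls both the artifact and its "expected" hash.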

CISA SSVC Assessment

Decision: Attend
Exploitation: PoC
Automatable: Yes
Technical Impact: Total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
Article 9 - Risk management system
ISO 42001
A.6.2.6 - AI system components and data provenance
A.9.3 - AI system testing and validation
NIST AI RMF
GOVERN-6.2 - Policies and procedures are in place for AI supply chain risk management
MANAGE-2.2 - Mechanisms are in place to respond to risks from AI system components
OWASP LLM Top 10
LLM03:2025 - Supply Chain

Frequently Asked Questions

What is CVE-2025-49655?

A critical deserialization RCE in Keras 3.11.0–3.11.2 bypasses safe mode entirely; the protection your ML engineers may have been trusting is worthless on affected versions. Any pipeline that loads Keras model files from external or user-supplied sources is exposed to full compromise the moment a malicious file is loaded. Patch to 3.11.3 now and treat model loading from untrusted sources as an uncontrolled code-execution path until the upgrade is verified.

Is CVE-2025-49655 actively exploited?

No confirmed in-the-wild exploitation of CVE-2025-49655 has been reported, though a public proof of concept exists (per the CISA SSVC assessment). Organizations should patch proactively.

How to fix CVE-2025-49655?

1. PATCH IMMEDIATELY: upgrade all Keras installations to 3.11.3; this is the only complete fix. Run `pip show keras` across your ML infrastructure to identify affected versions.
2. AUDIT model loading: catalog every place your code calls `keras.models.load_model()` or equivalent and identify the trust level of each source file.
3. DO NOT rely on `safe_mode=True` as a security control on any Keras version until you've confirmed 3.11.3 is deployed.
4. IMPLEMENT model provenance controls: cryptographically sign and hash-verify model files before loading, even from internal registries.
5. ISOLATE model loading: run model deserialization in sandboxed environments (containers with no network, read-only filesystems, minimal privileges) as a defense-in-depth measure.
6. DETECT: monitor for unexpected process spawning from Python/ML processes, outbound connections from training/inference nodes, and anomalous file-access patterns after model loads.
7. CHECK shared model stores: audit any Keras model files pulled from external sources (HuggingFace, S3, third parties) since Keras 3.11.0 was released (October 2025).

What systems are affected by CVE-2025-49655?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, model registries, MLOps platforms, collaborative AI development environments.

What is the CVSS score for CVE-2025-49655?

CVE-2025-49655 has a CVSS v3.1 base score of 9.8 (CRITICAL). The EPSS exploitation probability is 0.03% (score 0.00034).

Technical Details

NVD Description

Deserialization of untrusted data can occur in versions of the Keras framework running versions 3.11.0 up to but not including 3.11.3, enabling a maliciously uploaded Keras file containing a TorchModuleWrapper class to run arbitrary code on an end user’s system when loaded despite safe mode being enabled. The vulnerability can be triggered through both local and remote files.

Exploitation Scenario

Adversary identifies a target organization using Keras for model serving or fine-tuning workflows. They craft a malicious .keras model file embedding executable Python code within a serialized TorchModuleWrapper class payload. The file is uploaded to a shared model registry (internal, or public like HuggingFace), submitted as a 'fine-tuned' model via a partner API, or delivered through a compromised ML data pipeline. When an ML engineer or automated serving system calls `keras.models.load_model('malicious.keras', safe_mode=True)` on an affected version, deserialization triggers arbitrary code execution: establishing persistence, exfiltrating training data and credentials, or pivoting to adjacent GPU/compute infrastructure. The `safe_mode=True` argument gives false confidence and introduces no actual barrier.
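As defense in depth, deserialization can at least be pushed out of the main service process so a crash or payload never touches the parent's memory, credentials, or open handles. This is a minimal sketch only; a real sandbox would add no-network containers, read-only filesystems, and minimal privileges as recommended above, and the helper name is illustrative:

```python
import subprocess
import sys

# Child-process loader: fails fast if keras is absent or the file is malicious/missing.
LOADER_SNIPPET = (
    "import sys\n"
    "import keras\n"
    "keras.models.load_model(sys.argv[1])\n"
)

def probe_model_in_subprocess(model_path: str, timeout: int = 120) -> bool:
    """Attempt deserialization in a throwaway child process.

    Returns True only if the child loads the file and exits cleanly; any
    exception, crash, or hang is confined to the child and reported as False.
    """
    try:
        proc = subprocess.run(
            [sys.executable, "-c", LOADER_SNIPPET, model_path],
            capture_output=True, text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return False
    return proc.returncode == 0
```

Note that a child process alone does not stop malicious code from running with your privileges; it only protects the parent's address space. Treat it as one layer beneath the container and network isolation listed in the recommended actions.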

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

Timeline

Published
October 17, 2025
Last Modified
October 21, 2025
First Seen
October 17, 2025

Related Vulnerabilities