A critical deserialization RCE in Keras 3.11.0–3.11.2 bypasses safe mode entirely — the protection your ML engineers may have been trusting is worthless on affected versions. Any pipeline that loads Keras model files from external or user-supplied sources is exposed to arbitrary code execution. Patch to 3.11.3 now and treat any model loading from untrusted sources as an uncontrolled code execution path until verified.
Risk Assessment
CVSS 9.8 with network vector, no privileges, no user interaction — maximum exploitability on paper. The EPSS of 0.00034 indicates no observed in-the-wild exploitation at time of publication, but this will not hold: the attack primitive (malicious model file → RCE) is well-understood and tooling exists. The safe mode bypass elevates severity significantly: organizations that implemented safe mode as a compensating control are fully exposed. AI/ML teams routinely load models from HuggingFace, internal registries, and third-party sources, making the attack surface broad across any organization with an ML practice.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| keras | pip | >= 3.11.0, < 3.11.3 | 3.11.3 |
Running any Keras installation in this range? You're affected.
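As a quick triage aid, the vulnerable range above can be checked programmatically. A minimal sketch (the helper name `is_vulnerable` is illustrative, not part of any existing tool):

```python
# Flag Keras installs inside the vulnerable range [3.11.0, 3.11.3).
# Illustrative helper only; expects plain "X.Y.Z" version strings.
def is_vulnerable(version: str) -> bool:
    major, minor, patch = (int(p) for p in version.split(".")[:3])
    return (3, 11, 0) <= (major, minor, patch) < (3, 11, 3)
```

Feed it, for example, the output of `importlib.metadata.version("keras")` collected from each host.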
Severity & Risk
Attack Surface
Recommended Action
1) PATCH IMMEDIATELY: upgrade all Keras installations to 3.11.3 — this is the only complete fix. Run `pip show keras` across your ML infrastructure to identify affected versions.
2) AUDIT model loading: catalog every place your code calls `keras.models.load_model()` or equivalent and identify the trust level of the source file.
3) DO NOT rely on `safe_mode=True` as a security control on any Keras version until you've confirmed 3.11.3 is deployed.
4) IMPLEMENT model provenance controls: cryptographic signing and hash verification of model files before loading, even from internal registries.
5) ISOLATE model loading: run model deserialization in sandboxed environments (containers with no network, read-only filesystems, minimal privileges) as a defense-in-depth measure.
6) DETECT: monitor for unexpected process spawning from Python/ML processes, outbound connections from training/inference nodes, and anomalous file access patterns post-model-load.
7) CHECK shared model stores: audit any Keras model files pulled from external sources (HuggingFace, S3, third parties) since Keras 3.11.0 was released (October 2025).
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Classification
Compliance Impact
This CVE is relevant to:
Frequently Asked Questions
What is CVE-2025-49655?
A critical deserialization RCE in Keras 3.11.0–3.11.2 bypasses safe mode entirely — the protection your ML engineers may have been trusting is worthless on affected versions. Any pipeline that loads Keras model files from external or user-supplied sources is exposed to arbitrary code execution. Patch to 3.11.3 now and treat any model loading from untrusted sources as an uncontrolled code execution path until verified.
Is CVE-2025-49655 actively exploited?
No confirmed active exploitation of CVE-2025-49655 has been reported, but organizations should still patch proactively.
How to fix CVE-2025-49655?
1) PATCH IMMEDIATELY: upgrade all Keras installations to 3.11.3 — this is the only complete fix. Run `pip show keras` across your ML infrastructure to identify affected versions.
2) AUDIT model loading: catalog every place your code calls `keras.models.load_model()` or equivalent and identify the trust level of the source file.
3) DO NOT rely on `safe_mode=True` as a security control on any Keras version until you've confirmed 3.11.3 is deployed.
4) IMPLEMENT model provenance controls: cryptographic signing and hash verification of model files before loading, even from internal registries.
5) ISOLATE model loading: run model deserialization in sandboxed environments (containers with no network, read-only filesystems, minimal privileges) as a defense-in-depth measure.
6) DETECT: monitor for unexpected process spawning from Python/ML processes, outbound connections from training/inference nodes, and anomalous file access patterns post-model-load.
7) CHECK shared model stores: audit any Keras model files pulled from external sources (HuggingFace, S3, third parties) since Keras 3.11.0 was released (October 2025).
What systems are affected by CVE-2025-49655?
This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, model registries, MLOps platforms, collaborative AI development environments.
What is the CVSS score for CVE-2025-49655?
CVE-2025-49655 has a CVSS v3.1 base score of 9.8 (CRITICAL). The EPSS exploitation probability is 0.03% (score 0.00034).
Technical Details
NVD Description
Deserialization of untrusted data can occur in versions of the Keras framework running versions 3.11.0 up to but not including 3.11.3, enabling a maliciously uploaded Keras file containing a TorchModuleWrapper class to run arbitrary code on an end user’s system when loaded despite safe mode being enabled. The vulnerability can be triggered through both local and remote files.
Exploitation Scenario
An adversary identifies a target organization using Keras for model serving or fine-tuning workflows. They craft a malicious .keras model file embedding executable Python code within a serialized TorchModuleWrapper class payload. The file is uploaded to a shared model registry (internal or public like HuggingFace), submitted as a 'fine-tuned' model via a partner API, or delivered through a compromised ML data pipeline. When an ML engineer or automated serving system calls `keras.models.load_model('malicious.keras', safe_mode=True)` on an affected version, the deserialization triggers arbitrary code execution — establishing persistence, exfiltrating training data and credentials, or pivoting to adjacent GPU/compute infrastructure. The `safe_mode=True` flag gives false confidence and introduces no actual barrier.
Weaknesses (CWE)
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
References
Timeline
Related Vulnerabilities
| CVE | CVSS | Summary | Relationship |
|---|---|---|---|
| CVE-2025-12060 | 9.8 | keras: Path Traversal enables file access | Same package: keras |
| CVE-2025-1550 | 9.8 | Keras: safe_mode bypass enables RCE via model loading | Same package: keras |
| CVE-2024-3660 | 9.8 | Keras: RCE via malicious model deserialization | Same package: keras |
| CVE-2024-49326 | 9.8 | Affiliator WP Plugin: Unauthenticated Web Shell Upload | |
| CVE-2026-1462 | 8.8 | Keras: safe_mode bypass allows RCE via model deserialization | Same package: keras |