A critical deserialization RCE in Keras 3.11.0–3.11.2 bypasses safe mode entirely, so the protection your ML engineers may have been trusting is worthless on affected versions. Any pipeline that loads Keras model files from external or user-supplied sources is exposed to arbitrary code execution. Patch to 3.11.3 now, and treat model loading from untrusted sources as an uncontrolled code execution path until the upgrade is verified.
## Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| keras | pip | >= 3.11.0, < 3.11.3 | 3.11.3 |
Running Keras anywhere in that range? You're affected.
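A quick way to triage a host against that range (a minimal sketch; assumes the `packaging` library, which ships alongside pip, is available):

```python
from packaging.version import Version

def is_vulnerable(installed: str) -> bool:
    """True if the given Keras version falls in the vulnerable range."""
    v = Version(installed)
    return Version("3.11.0") <= v < Version("3.11.3")

if __name__ == "__main__":
    # Check the environment's installed Keras, if present.
    from importlib.metadata import version, PackageNotFoundError
    try:
        print("vulnerable:", is_vulnerable(version("keras")))
    except PackageNotFoundError:
        print("keras not installed")
```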
## Severity & Risk

## Recommended Action
1. PATCH IMMEDIATELY: upgrade all Keras installations to 3.11.3; this is the only complete fix. Run `pip show keras` across your ML infrastructure to identify affected versions.
2. AUDIT model loading: catalog every place your code calls `keras.models.load_model()` or an equivalent, and identify the trust level of each source file.
3. DO NOT rely on `safe_mode=True` as a security control on any Keras version until you've confirmed 3.11.3 is deployed.
4. IMPLEMENT model provenance controls: cryptographic signing and hash verification of model files before loading, even from internal registries.
5. ISOLATE model loading: run model deserialization in sandboxed environments (containers with no network, read-only filesystems, minimal privileges) as a defense-in-depth measure.
6. DETECT: monitor for unexpected process spawning from Python/ML processes, outbound connections from training/inference nodes, and anomalous file access patterns after model loads.
7. CHECK shared model stores: audit any Keras model files pulled from external sources (HuggingFace, S3, third parties) since Keras 3.11.0 was released (October 2025).
## Classification

## Compliance Impact

This CVE is relevant to:
## Technical Details

### NVD Description
Deserialization of untrusted data can occur in Keras framework versions 3.11.0 up to but not including 3.11.3, enabling a maliciously uploaded Keras file containing a TorchModuleWrapper class to run arbitrary code on an end user's system when loaded, despite safe mode being enabled. The vulnerability can be triggered through both local and remote files.
### Exploitation Scenario
An adversary identifies a target organization using Keras for model serving or fine-tuning workflows. They craft a malicious `.keras` model file embedding executable Python code within a serialized TorchModuleWrapper payload. The file is uploaded to a shared model registry (internal, or public like HuggingFace), submitted as a 'fine-tuned' model via a partner API, or delivered through a compromised ML data pipeline. When an ML engineer or automated serving system calls `keras.models.load_model('malicious.keras', safe_mode=True)` on an affected version, deserialization triggers arbitrary code execution: establishing persistence, exfiltrating training data and credentials, or pivoting to adjacent GPU/compute infrastructure. The `safe_mode=True` argument gives false confidence and provides no actual barrier.
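As a lightweight complement to container-based isolation, the load can be attempted in a throwaway interpreter with a stripped environment and a timeout. This is a partial sandbox only (it does not block code execution, it limits what the code inherits); the helper name is illustrative:

```python
import subprocess
import sys

def loads_cleanly(model_path: str, timeout: int = 60) -> bool:
    """Attempt keras.models.load_model in a child process that inherits no
    environment variables (so no cloud credentials or API tokens).
    Returns True only if the load exits cleanly."""
    code = f"import keras; keras.models.load_model({model_path!r})"
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            env={},  # do not pass the parent's secrets to untrusted code
            capture_output=True,
            timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return False
    return proc.returncode == 0
```

A container with `--network none` and a read-only filesystem remains the stronger control; this pattern is only a fallback where containers are unavailable.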
## Weaknesses (CWE)

## CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
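For readers who don't parse CVSS vectors by eye, a tiny helper (a sketch, not an official scorer) splits the vector above into its metrics:

```python
def parse_cvss(vector: str) -> dict:
    """Split a CVSS v3.x vector string into a metric -> value mapping."""
    head, *metrics = vector.split("/")
    if not head.startswith("CVSS:"):
        raise ValueError("not a CVSS vector")
    return dict(m.split(":", 1) for m in metrics)

metrics = parse_cvss("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
# AV:N = network attack vector, PR:N = no privileges required,
# UI:N = no user interaction, and C/I/A all High.
```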