CVE-2024-3660: Keras: RCE via malicious model deserialization
Severity: CRITICAL · PoC available · CISA SSVC: Track*

Any system that loads Keras/TensorFlow models from external or user-supplied sources is exposed to full remote code execution; no authentication or user interaction is required. Patch immediately to Keras 2.13 or later, and audit every pipeline endpoint that accepts or loads model files. Until patched, treat model loading from untrusted sources as equivalent to running arbitrary user code on your infrastructure.
Risk Assessment
Severity is maximal: CVSS 9.8, network-reachable, zero authentication, zero user interaction required. Keras is embedded in virtually every TensorFlow-based ML stack, making blast radius enormous. The attack requires only delivering a malicious model file — a capability well within reach of commodity threat actors. AI/ML systems are disproportionately exposed because model ingestion from external registries, user uploads, and transfer learning workflows is a standard operational pattern, not an edge case.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| keras | pip | < 2.13 | 2.13 |
Running Keras older than 2.13? You're affected.
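A quick exposure check can be scripted from the affected range above (everything below 2.13). This is a sketch using only the standard library; the helper names are illustrative, not part of any official tooling:

```python
from importlib.metadata import version, PackageNotFoundError

def is_vulnerable_version(v: str) -> bool:
    """Versions below 2.13 fall inside the affected range."""
    major, minor = (int(x) for x in v.split(".")[:2])
    return (major, minor) < (2, 13)

def keras_is_vulnerable() -> bool:
    """True if the keras installed in this environment is in the affected range."""
    try:
        return is_vulnerable_version(version("keras"))
    except PackageNotFoundError:
        return False  # keras is not installed at all
```

Run `keras_is_vulnerable()` in each environment that loads models; a `True` result means the host needs the upgrade described below.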
Recommended Action
1. Upgrade Keras to 2.13 or later immediately; this is the only complete fix.
2. Inventory all systems loading Keras models and prioritize those accepting external input.
3. Enforce model provenance: only load models from internal, hash-verified artifact stores.
4. Never load models from user-supplied paths or untrusted registries without sandboxing.
5. Run model-loading processes in isolated environments (containers with no network, read-only filesystems, minimal IAM permissions).
6. For detection: monitor for unexpected outbound connections or process spawning from ML service processes; scan model files with tools like ModelScan before loading.
7. Treat .h5 and SavedModel files as executables; apply the same controls as code artifacts.
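The provenance control in step 3 can be sketched as a hash allow-list gate in front of every model load. This is a minimal illustration, not a complete provenance system; the artifact name and pinned digest are hypothetical examples:

```python
import hashlib
from pathlib import Path

# Allow-list of trusted artifacts: filename -> pinned SHA-256 digest.
# The entry below is a placeholder for illustration only.
TRUSTED_MODELS = {
    "sentiment-v3.h5": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_model(path: str) -> bool:
    """Return True only if the file's SHA-256 matches its pinned digest."""
    p = Path(path)
    expected = TRUSTED_MODELS.get(p.name)
    if expected is None:
        return False  # unknown artifact: refuse to load
    digest = hashlib.sha256(p.read_bytes()).hexdigest()
    return digest == expected

# Gate every load behind verification, e.g.:
# if verify_model(model_path):
#     model = keras.models.load_model(model_path)
# else:
#     raise RuntimeError("untrusted model artifact refused")
```

Hash pinning blocks tampered or swapped artifacts, but it is a complement to upgrading, not a substitute: a pinned model that was malicious from the start will still pass.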
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2024-3660?
CVE-2024-3660 is an arbitrary code injection vulnerability in the Keras framework (versions below 2.13): a maliciously crafted model file can execute arbitrary code with the permissions of the application that loads it. Any system loading Keras/TensorFlow models from external or user-supplied sources is exposed, with no authentication or user interaction required.
Is CVE-2024-3660 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2024-3660, increasing the risk of exploitation.
How to fix CVE-2024-3660?
1. Upgrade Keras to 2.13 or later immediately — this is the only complete fix. 2. Inventory all systems loading Keras models and prioritize those accepting external input. 3. Enforce model provenance: only load models from internal, hash-verified artifact stores. 4. Never load models from user-supplied paths or untrusted registries without sandboxing. 5. Run model loading processes in isolated environments (containers with no-network, read-only filesystems, minimal IAM permissions). 6. For detection: monitor for unexpected outbound connections or process spawning from ML service processes; scan model files with tools like ModelScan before loading. 7. Treat .h5 and SavedModel files as executables — apply the same controls as code artifacts.
What systems are affected by CVE-2024-3660?
This vulnerability affects the following AI/ML architecture patterns: Training pipelines, Model serving, MLOps platforms, Transfer learning workflows, Model registries, AI development environments.
What is the CVSS score for CVE-2024-3660?
CVE-2024-3660 has a CVSS v3.1 base score of 9.8 (CRITICAL). The EPSS exploitation probability is 0.37%.
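The 9.8 score can be reproduced from the published vector (CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H) with the CVSS v3.1 base formula. A short sketch using the spec's metric weights:

```python
import math

def roundup(x: float) -> float:
    # CVSS v3.1 spec Appendix A: smallest one-decimal value >= x,
    # computed with integer arithmetic to dodge floating-point error.
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

# Weights for AV:N / AC:L / PR:N / UI:N
av, ac, pr, ui = 0.85, 0.77, 0.85, 0.85
# Weights for C:H / I:H / A:H
c = i = a = 0.56

iss = 1 - (1 - c) * (1 - i) * (1 - a)      # impact sub-score
impact = 6.42 * iss                        # Scope: Unchanged
exploitability = 8.22 * av * ac * pr * ui
base = roundup(min(impact + exploitability, 10))
print(base)  # → 9.8
```

Note that the low EPSS figure (0.37%) is a probability-of-exploitation estimate, not a severity measure; the two scores answer different questions.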
Technical Details
NVD Description
An arbitrary code injection vulnerability in TensorFlow's Keras framework (versions before 2.13) allows attackers to execute arbitrary code with the same permissions as the application, via a crafted model that permits arbitrary code execution regardless of the application.
Exploitation Scenario
An adversary crafts a malicious Keras model file embedding arbitrary Python code via the Lambda layer or custom object deserialization hooks. The file is uploaded to an MLOps platform (e.g., an internal model registry), submitted via a 'model fine-tuning' API endpoint, or published to a public model hub and referenced in an automated transfer learning pipeline. When the target system calls keras.models.load_model() on the file, the embedded payload executes with the ML service's privileges — establishing a reverse shell, exfiltrating environment variables and API keys, or pivoting to internal services. The attack requires no interaction beyond delivering the model file and works against any unpatched Keras deployment that loads external models.
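The core mechanism can be illustrated with a pure-stdlib sketch. Keras versions before 2.13 stored Lambda-layer functions as marshalled Python bytecode inside the model file; this is a conceptual analogy to that behavior, not the actual Keras internals verbatim:

```python
import marshal
import types

def payload():
    # Stand-in for attacker-chosen code; a real payload could spawn a
    # reverse shell or read environment variables and API keys.
    return "arbitrary code ran"

# "Save": the function's compiled bytecode becomes bytes in the artifact,
# conceptually like a Lambda layer inside an .h5 file.
blob = marshal.dumps(payload.__code__)

# "Load": deserialization turns those bytes back into a live, callable
# function — this is why loading the model file executes the payload.
restored = types.FunctionType(marshal.loads(blob), globals())
print(restored())
```

Because the bytes in the file round-trip into executable code, no parser bug is needed: model loading itself is the execution primitive.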
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

References
- kb.cert.org/vuls/id/253266 3rd Party VDB
- github.com/AndyVillegas/tensorflow-hdf5-rce-poc Exploit
- github.com/PuddinCat/GithubRepoSpider Exploit
- github.com/ShenaoW/awesome-llm-supply-chain-security Exploit
- github.com/aaryanbhujang/CVE-2024-3660-PoC Exploit
- github.com/nomi-sec/PoC-in-GitHub Exploit
- github.com/zulloper/cve-poc Exploit
Related Vulnerabilities
- CVE-2025-12060 (9.8) keras: Path Traversal enables file access (same package: keras)
- CVE-2024-49326 (9.8) Affiliator WP Plugin: Unauthenticated Web Shell Upload (same package: keras)
- CVE-2025-49655 (9.8) keras: Deserialization enables RCE (same package: keras)
- CVE-2025-1550 (9.8) Keras: safe_mode bypass enables RCE via model loading (same package: keras)
- CVE-2026-1462 (8.8) Keras: safe_mode bypass allows RCE via model deserialization (same package: keras)