CVE-2024-3660: Keras: RCE via malicious model deserialization

CRITICAL | PoC Available | CISA SSVC: Track*
Published April 16, 2024
CISO Take

Any system loading Keras/TensorFlow models from external or user-supplied sources is exposed to full remote code execution — no authentication or interaction required. Patch immediately to Keras 2.13+, and audit every pipeline endpoint that accepts or loads model files. Until patched, treat model loading from untrusted sources as equivalent to running arbitrary user code on your infrastructure.

Risk Assessment

Severity is maximal: CVSS 9.8, network-reachable, zero authentication, zero user interaction required. Keras is embedded in virtually every TensorFlow-based ML stack, making blast radius enormous. The attack requires only delivering a malicious model file — a capability well within reach of commodity threat actors. AI/ML systems are disproportionately exposed because model ingestion from external registries, user uploads, and transfer learning workflows is a standard operational pattern, not an edge case.

Affected Systems

Package: keras
Ecosystem: pip
Vulnerable Range: < 2.13
Patched: 2.13

Do you use keras? Any version below 2.13 is affected.

Severity & Risk

CVSS 3.1: 9.8 / 10
EPSS: 0.4% chance of exploitation in 30 days (higher than 59% of all CVEs)
Exploitation Status: Exploit Available
Exploitation: Medium
Sophistication: Trivial
Exploitation Confidence: Medium (public PoC indexed via trickest/cve)

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

Attack Vector (AV): Network
Attack Complexity (AC): Low
Privileges Required (PR): None
User Interaction (UI): None
Scope (S): Unchanged
Confidentiality (C): High
Integrity (I): High
Availability (A): High

Recommended Action

7 steps
  1. Upgrade Keras to 2.13 or later immediately — this is the only complete fix.

  2. Inventory all systems loading Keras models and prioritize those accepting external input.

  3. Enforce model provenance: only load models from internal, hash-verified artifact stores (a minimal sketch of such a check follows this list).

  4. Never load models from user-supplied paths or untrusted registries without sandboxing.

  5. Run model loading processes in isolated environments (containers with no network access, read-only filesystems, minimal IAM permissions).

  6. For detection: monitor for unexpected outbound connections or process spawning from ML service processes; scan model files with tools like ModelScan before loading.

  7. Treat .h5 and SavedModel files as executables — apply the same controls as code artifacts.
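
The provenance and safe-loading controls in steps 3 and 5 can be combined in a small wrapper. The sketch below is an illustration under stated assumptions, not a complete control: the allowlist and the load_model_verified helper are hypothetical names, the digest value is a placeholder, and the safe_mode argument assumes Keras 2.13+ (it is honored for the native .keras format, not for legacy HDF5 or SavedModel artifacts).

    # Sketch: hash-verified, safe-mode model loading (assumes Keras >= 2.13).
    import hashlib

    import keras

    # Hypothetical allowlist published by the internal artifact store.
    APPROVED_SHA256 = {
        "sentiment-classifier-v3.keras": "<approved sha256 hex digest>",
    }

    def sha256_of(path: str) -> str:
        """Stream the file so large model artifacts don't exhaust memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def load_model_verified(path: str, artifact_name: str):
        # Optionally run an out-of-process scanner (e.g. ModelScan) on the
        # file before calling this function; keep that step in a separate,
        # sandboxed process.
        if APPROVED_SHA256.get(artifact_name) != sha256_of(path):
            raise RuntimeError(f"unapproved model artifact: {artifact_name}")
        # safe_mode=True refuses to deserialize unsafe constructs such as
        # Lambda layers carrying arbitrary Python (native .keras format only).
        return keras.models.load_model(path, safe_mode=True)

Even when the hash check passes, running the load inside a locked-down container (step 5) is still worthwhile: the allowlist only proves the file is the one you approved, not that it is benign.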

CISA SSVC Assessment

Decision: Track*
Exploitation: None
Automatable: Yes
Technical Impact: Total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness, and cybersecurity
Article 9 - Risk management system
ISO 42001
A.6.1.5 - AI supply chain
A.9.4 - AI system security
NIST AI RMF
GOVERN 6.2 - Organizational teams are committed to policies, processes, and procedures that address risks from third-party entities
MANAGE 2.2 - Mechanisms are in place to sustain the value of deployed AI systems
OWASP LLM Top 10
LLM05:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2024-3660?

CVE-2024-3660 is a critical (CVSS 9.8) arbitrary code injection vulnerability in TensorFlow's Keras framework (versions below 2.13). A maliciously crafted model file can embed code, for example via Lambda layers, that executes with the loading application's permissions as soon as the model is deserialized, with no authentication or user interaction required. Patch to Keras 2.13+ and treat model loading from untrusted sources as equivalent to running arbitrary user code on your infrastructure.

Is CVE-2024-3660 actively exploited?

Proof-of-concept exploit code for CVE-2024-3660 is publicly indexed (trickest/cve), which increases the risk of exploitation. CISA's SSVC assessment currently records exploitation as "none", meaning no confirmed in-the-wild exploitation at the time of assessment.

How to fix CVE-2024-3660?

1. Upgrade Keras to 2.13 or later immediately — this is the only complete fix.
2. Inventory all systems loading Keras models and prioritize those accepting external input.
3. Enforce model provenance: only load models from internal, hash-verified artifact stores.
4. Never load models from user-supplied paths or untrusted registries without sandboxing.
5. Run model loading processes in isolated environments (containers with no network access, read-only filesystems, minimal IAM permissions).
6. For detection: monitor for unexpected outbound connections or process spawning from ML service processes; scan model files with tools like ModelScan before loading.
7. Treat .h5 and SavedModel files as executables — apply the same controls as code artifacts.

What systems are affected by CVE-2024-3660?

This vulnerability affects the following AI/ML architecture patterns: Training pipelines, Model serving, MLOps platforms, Transfer learning workflows, Model registries, AI development environments.

What is the CVSS score for CVE-2024-3660?

CVE-2024-3660 has a CVSS v3.1 base score of 9.8 (CRITICAL). The EPSS exploitation probability is 0.37%.

Technical Details

NVD Description

An arbitrary code injection vulnerability in TensorFlow's Keras framework (< 2.13) allows attackers to execute arbitrary code with the same permissions as the application, using a model file that allows arbitrary code execution irrespective of the application loading it.

Exploitation Scenario

An adversary crafts a malicious Keras model file embedding arbitrary Python code via the Lambda layer or custom object deserialization hooks. The file is uploaded to an MLOps platform (e.g., an internal model registry), submitted via a 'model fine-tuning' API endpoint, or published to a public model hub and referenced in an automated transfer learning pipeline. When the target system calls keras.models.load_model() on the file, the embedded payload executes with the ML service's privileges — establishing a reverse shell, exfiltrating environment variables and API keys, or pivoting to internal services. The attack requires no interaction beyond delivering the model file and works against any unpatched Keras deployment that loads external models.
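
As a complement to scanning, a defender can triage a legacy HDF5 model's serialized architecture without deserializing any layers, flagging Lambda layers before keras.models.load_model() ever runs. The sketch below is an illustration only: it assumes the architecture JSON sits in the HDF5 root attribute model_config (which is how tf.keras writes .h5 saves), and it is not a substitute for a dedicated scanner such as ModelScan, which covers more formats and payload types.

    # Sketch: pre-load triage of a legacy Keras .h5 file for Lambda layers.
    import json

    import h5py

    SUSPECT_CLASSES = {"Lambda"}

    def find_suspect_layers(node, found=None):
        """Walk the nested config structure and collect suspicious class names."""
        found = [] if found is None else found
        if isinstance(node, dict):
            if node.get("class_name") in SUSPECT_CLASSES:
                found.append(node["class_name"])
            for value in node.values():
                find_suspect_layers(value, found)
        elif isinstance(node, list):
            for item in node:
                find_suspect_layers(item, found)
        return found

    def triage_h5_model(path: str) -> list:
        with h5py.File(path, "r") as f:
            raw = f.attrs.get("model_config")
        if raw is None:
            return []  # no architecture metadata found; inspect by other means
        if isinstance(raw, bytes):
            raw = raw.decode("utf-8")
        return find_suspect_layers(json.loads(raw))

    # Example: refuse to load anything that declares a Lambda layer.
    # if triage_h5_model("untrusted_model.h5"):
    #     raise SystemExit("suspicious model artifact, refusing to load")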

Weaknesses (CWE)

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
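
For readers who want to reproduce the 9.8 from this vector, the sketch below applies the CVSS v3.1 base-score formula with the metric weights from the specification (Scope Unchanged); math.ceil here is a simplification of the spec's Roundup helper that gives the same result for this vector.

    # Sketch: CVSS v3.1 base score for AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H.
    import math

    AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.85   # Network / Low / None / None
    C = I = A = 0.56                          # High impact on C, I, and A

    iss = 1 - (1 - C) * (1 - I) * (1 - A)     # impact sub-score, ~0.9148
    impact = 6.42 * iss                       # Scope Unchanged branch
    exploitability = 8.22 * AV * AC * PR * UI # ~3.887

    base_score = math.ceil(min(impact + exploitability, 10) * 10) / 10
    print(base_score)                         # -> 9.8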

Timeline

Published
April 16, 2024
Last Modified
September 23, 2025
First Seen
April 16, 2024

Related Vulnerabilities