CVE-2026-0897: keras: Resource Exhaustion enables DoS

GHSA-mgx6-5cf9-rr43 HIGH
Published January 15, 2026
CISO Take

If your ML pipelines or model serving infrastructure load .keras files from external, user-controlled, or shared repository sources, patch Keras to 3.12.1 now. A crafted weight archive can crash the Python interpreter with zero authentication required, taking down inference workers or training jobs. If immediate patching is blocked, enforce strict allowlisting of model sources and apply container memory limits to bound the blast radius.

Risk Assessment

MEDIUM-HIGH for organizations running public model evaluation endpoints, automated fine-tuning pipelines accepting external checkpoints, or transfer learning workflows pulling from shared repositories. LOW for air-gapped environments loading only internally-signed, origin-verified weights. EPSS of 0.00029 indicates minimal active exploitation today, but the attack primitive is trivially reproducible by any attacker who can deliver a crafted file to a pipeline — no ML expertise required beyond understanding HDF5 metadata structure.

Affected Systems

Package: keras (pip)
Vulnerable range: >= 3.0.0, <= 3.12.0
Patched version: 3.12.1
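A quick way to check whether a deployment sits in the vulnerable range is to compare the installed version against the bounds above. This is a stdlib-only sketch; the helper names (`version_is_patched`, `keras_is_patched`) are illustrative, not part of any Keras or pip API.

```python
# Version gate (sketch): refuse to serve models if the installed Keras
# falls inside the vulnerable range (>= 3.0.0, <= 3.12.0).
from importlib.metadata import PackageNotFoundError, version

def _parse(v: str) -> tuple:
    """Best-effort (major, minor, patch) tuple; ignores pre-release tags."""
    parts = []
    for piece in v.split(".")[:3]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits or 0))
    return tuple(parts)

def version_is_patched(installed: str) -> bool:
    """True if this Keras version is outside the advisory's vulnerable range."""
    v = _parse(installed)
    return v < (3, 0, 0) or v >= (3, 12, 1)

def keras_is_patched() -> bool:
    """True if Keras is absent, pre-3.x, or at 3.12.1 or later."""
    try:
        return version_is_patched(version("keras"))
    except PackageNotFoundError:
        return True  # Keras not installed; nothing to patch
```

A CI step that asserts `keras_is_patched()` at image build time catches regressions where a pinned requirements file silently resolves back into the vulnerable range.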


Severity & Risk

CVSS 3.1: N/A
EPSS: 0.03% chance of exploitation in 30 days (higher than 9% of all CVEs)
Exploitation status: no known exploitation
Sophistication: trivial

Recommended Action

6 steps
  1. Patch immediately: pip install 'keras>=3.12.1'. Verify with pip show keras.
  2. If patching is blocked, restrict model loading to cryptographically verified, internally hosted artifacts only — reject any externally sourced .keras or .h5 files at the pipeline boundary.
  3. Apply container/cgroup memory limits on all ML serving and training pods to prevent host-level memory exhaustion from a single crashing process.
  4. Add file size validation and HDF5 shape metadata inspection before invoking keras.saving.load_model() or equivalent.
  5. Detection: alert on abnormal RSS/VSZ spikes in Python ML processes, or sudden OOMKilled pod events correlated with model load operations.
  6. Audit all publicly accessible endpoints that trigger weight loading — model fine-tuning APIs, evaluation services, and any user-facing upload flows.
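The shape-inspection step above can be sketched as a pre-load guard. This is an illustration, not a Keras API: `check_keras_archive`, `MAX_TENSOR_BYTES`, and the 2 GiB threshold are assumptions to adapt to your own policy. The key property is that h5py exposes a dataset's declared shape and dtype from metadata alone, so the guard itself never triggers the oversized allocation.

```python
# Pre-load guard (sketch): inspect declared dataset shapes inside a
# .keras archive before handing it to keras.saving.load_model().
import io
import math
import zipfile

MAX_TENSOR_BYTES = 2 * 1024**3  # illustrative policy: reject tensors over 2 GiB

def declared_nbytes(shape, itemsize: int) -> int:
    """Bytes the HDF5 metadata claims a dataset needs, from shape alone."""
    return math.prod(int(d) for d in shape) * itemsize

def check_keras_archive(path: str, limit: int = MAX_TENSOR_BYTES) -> None:
    """Raise ValueError for any dataset whose declared size exceeds the limit."""
    import h5py  # third-party; imported lazily so the helper above is stdlib-only

    # .keras is a zip archive; the weights live in model.weights.h5
    with zipfile.ZipFile(path) as zf:
        buf = io.BytesIO(zf.read("model.weights.h5"))
    with h5py.File(buf, "r") as f:
        def visit(name, obj):
            if isinstance(obj, h5py.Dataset):
                n = declared_nbytes(obj.shape, obj.dtype.itemsize)
                if n > limit:
                    raise ValueError(f"{name} declares {n} bytes; refusing to load")
        f.visititems(visit)
```

Pair this with a plain archive-size check first (zip member sizes are available from `ZipFile.infolist()` without extraction), so the guard never buffers an absurdly large upload either.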

CISA SSVC Assessment

Decision: Track
Exploitation: none
Automatable: no
Technical Impact: partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
  Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
  A.6.1.5 - AI system security
  A.6.2.6 - AI system security
NIST AI RMF
  MANAGE 2.4 - Residual risks to AI are managed and monitored
  MS-2.5 - Manage AI risks on an ongoing basis
OWASP LLM Top 10
  LLM04 - Model Denial of Service
  LLM10:2025 - Unbounded Consumption

Frequently Asked Questions

What is CVE-2026-0897?

CVE-2026-0897 is a resource-exhaustion vulnerability (CWE-770, Allocation of Resources Without Limits or Throttling) in the HDF5 weight-loading component of Keras 3.0.0 through 3.12.0. A crafted .keras archive containing a model.weights.h5 file whose dataset declares an extremely large shape causes the loader to attempt a massive memory allocation before reading any data, exhausting RAM and crashing the Python interpreter. No authentication is required; the attacker only needs to deliver the crafted file to a pipeline that loads it.

Is CVE-2026-0897 actively exploited?

No confirmed active exploitation of CVE-2026-0897 has been reported, but organizations should still patch proactively.

How to fix CVE-2026-0897?

  1. Patch immediately: pip install 'keras>=3.12.1'. Verify with pip show keras.
  2. If patching is blocked, restrict model loading to cryptographically verified, internally hosted artifacts only — reject any externally sourced .keras or .h5 files at the pipeline boundary.
  3. Apply container/cgroup memory limits on all ML serving and training pods to prevent host-level memory exhaustion from a single crashing process.
  4. Add file size validation and HDF5 shape metadata inspection before invoking keras.saving.load_model() or equivalent.
  5. Detection: alert on abnormal RSS/VSZ spikes in Python ML processes, or sudden OOMKilled pod events correlated with model load operations.
  6. Audit all publicly accessible endpoints that trigger weight loading — model fine-tuning APIs, evaluation services, and any user-facing upload flows.
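The detection step can be approximated in-process with a stdlib-only wrapper around the load call. This is a Unix-specific sketch, and `load_with_rss_watch` and its 4 GiB default are hypothetical names, not a Keras or monitoring API; note that `ru_maxrss` units differ by platform (KiB on Linux, bytes on macOS).

```python
# Detection sketch (stdlib-only, Unix): wrap a model-load call and flag
# abnormal growth in peak resident set size.
import resource

def load_with_rss_watch(load_fn, *args, limit_mib: int = 4096, **kwargs):
    """Run load_fn(*args, **kwargs), raising if peak RSS grew past limit_mib."""
    # ru_maxrss is reported in KiB on Linux; adjust the unit on macOS.
    before = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    result = load_fn(*args, **kwargs)
    after = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    grown_kib = after - before
    if grown_kib > limit_mib * 1024:
        raise RuntimeError(f"model load grew peak RSS by {grown_kib // 1024} MiB")
    return result
```

This only catches large-but-survivable loads; an allocation big enough to OOM-kill the process never returns to the wrapper, which is why the hard cap belongs at the container/cgroup level as step 3 recommends.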

What systems are affected by CVE-2026-0897?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, transfer learning workflows, MLOps/CI-CD pipelines, model evaluation platforms.

What is the CVSS score for CVE-2026-0897?

No CVSS score has been assigned yet.

Technical Details

NVD Description

Allocation of Resources Without Limits or Throttling in the HDF5 weight loading component in Google Keras 3.0.0 through 3.12.0 and 3.13.0 on all platforms allows a remote attacker to cause a Denial of Service (DoS) through memory exhaustion and a crash of the Python interpreter via a crafted .keras archive containing a valid model.weights.h5 file whose dataset declares an extremely large shape.

Exploitation Scenario

An adversary targeting an organization running an automated model evaluation platform or transfer learning pipeline crafts a valid .keras archive. The archive contains a well-formed model.weights.h5 with a legitimate HDF5 header but declares a dataset shape with extreme dimensions (e.g., [2147483647, 2147483647]). When the pipeline invokes keras.saving.load_model() or loads the weights file, the HDF5 component attempts to pre-allocate memory proportional to the declared shape before reading actual data. System RAM is exhausted within seconds, the Python interpreter crashes via an unhandled MemoryError, and the inference worker or training job is terminated. In containerized ML workloads without memory limits, this can cascade to OOMKill events affecting co-located services. No authentication, no ML expertise, and no network-level access beyond file delivery to the target pipeline are required.
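A back-of-envelope check of the scenario's numbers shows why the crash is immediate: the declared shape alone, before any data is read, commits the loader to an allocation no host can satisfy (assuming a 4-byte float32 element, which is typical for Keras weights).

```python
# Memory the loader would attempt to reserve for a single float32
# dataset with the declared shape from the scenario above.
shape = (2_147_483_647, 2_147_483_647)  # INT32_MAX x INT32_MAX
itemsize = 4  # bytes per float32 element

nbytes = shape[0] * shape[1] * itemsize
print(f"{nbytes / 2**60:.1f} EiB")  # ~16 EiB, versus tens of GiB of real RAM
```

The request is roughly nine orders of magnitude beyond a well-provisioned server's memory, so the allocator fails (or the OOM killer fires) as soon as the dataset is touched.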

Timeline

Published
January 15, 2026
Last Modified
May 6, 2026
First Seen
March 24, 2026

Related Vulnerabilities