If your ML pipelines or model serving infrastructure load .keras files from external, user-controlled, or shared repository sources, patch Keras to 3.12.1 now. A crafted weight archive can crash the Python interpreter with zero authentication required, taking down inference workers or training jobs. If immediate patching is blocked, enforce strict allowlisting of model sources and apply container memory limits to bound blast radius.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| keras | pip | >= 3.0.0, < 3.12.1 | 3.12.1 |
If your environment pins any keras release from 3.0.0 up to (but not including) 3.12.1, you are affected; Keras 2.x is outside the vulnerable range.
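As a quick check, the installed version can be tested against the table's vulnerable range. This is a minimal sketch; the naive version-tuple parsing below does not handle pre-release or dev suffixes.

```python
from importlib.metadata import PackageNotFoundError, version

VULNERABLE_FROM = (3, 0, 0)    # inclusive lower bound from the advisory table
PATCHED_AT = (3, 12, 1)        # first fixed release

def parse_release(v: str) -> tuple:
    # Naive "X.Y.Z" parsing; pre-release/dev suffixes are not handled here.
    return tuple(int(part) for part in v.split(".")[:3])

def is_vulnerable(v: str) -> bool:
    rel = parse_release(v)
    return VULNERABLE_FROM <= rel < PATCHED_AT

try:
    installed = version("keras")
    status = "VULNERABLE - upgrade" if is_vulnerable(installed) else "ok"
    print(f"keras {installed}: {status}")
except PackageNotFoundError:
    print("keras is not installed in this environment")
```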
Recommended Action
1. Patch immediately: pip install 'keras>=3.12.1'. Verify with pip show keras.
2. If patching is blocked, restrict model loading to cryptographically verified, internally hosted artifacts only; reject any externally sourced .keras or .h5 file at the pipeline boundary.
3. Apply container/cgroup memory limits to all ML serving and training pods so a single crashing process cannot exhaust host memory.
4. Validate file size and inspect HDF5 shape metadata before invoking keras.saving.load_model() or equivalent.
5. Detection: alert on abnormal RSS/VSZ spikes in Python ML processes, and on OOMKilled pod events that correlate with model load operations.
6. Audit every publicly accessible endpoint that triggers weight loading: model fine-tuning APIs, evaluation services, and user-facing upload flows.
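The pre-load inspection step can be sketched with h5py, which reads dataset shapes from metadata without allocating the declared arrays. The thresholds below are illustrative assumptions to tune per deployment, not values from the advisory.

```python
import math
import os

import h5py  # third-party: pip install h5py

MAX_ELEMENTS_PER_DATASET = 500_000_000   # illustrative budget; tune per model
MAX_FILE_BYTES = 5 * 2**30               # illustrative 5 GiB file-size cap

def validate_weights_file(path: str) -> None:
    """Reject a weights file whose metadata declares absurd allocations."""
    if os.path.getsize(path) > MAX_FILE_BYTES:
        raise ValueError(f"{path}: exceeds {MAX_FILE_BYTES} byte size cap")
    with h5py.File(path, "r") as f:
        def check(name, obj):
            # .shape comes from HDF5 metadata; no dataset payload is read
            if isinstance(obj, h5py.Dataset):
                declared = math.prod(obj.shape)
                if declared > MAX_ELEMENTS_PER_DATASET:
                    raise ValueError(
                        f"{path}:{name} declares {declared} elements; refusing")
        f.visititems(check)
```

Run this at the pipeline boundary before any call into keras loading code; a rejection here costs a metadata read rather than an interpreter crash.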
Technical Details
NVD Description
Allocation of Resources Without Limits or Throttling in the HDF5 weight loading component in Google Keras 3.0.0 through 3.12.0 and 3.13.0 on all platforms allows a remote attacker to cause a Denial of Service (DoS) through memory exhaustion and a crash of the Python interpreter via a crafted .keras archive containing a valid model.weights.h5 file whose dataset declares an extremely large shape.
Exploitation Scenario
An adversary targeting an organization running an automated model evaluation platform or transfer learning pipeline crafts a valid .keras archive. The archive contains a well-formed model.weights.h5 with a legitimate HDF5 header but declares a dataset shape with extreme dimensions (e.g., [2147483647, 2147483647]). When the pipeline invokes keras.saving.load_model() or loads the weights file, the HDF5 component attempts to pre-allocate memory proportional to the declared shape before reading actual data. System RAM is exhausted within seconds, the Python interpreter crashes via an unhandled MemoryError, and the inference worker or training job is terminated. In containerized ML workloads without memory limits, this can cascade to OOMKill events affecting co-located services. No authentication, no ML expertise, and no network-level access beyond file delivery to the target pipeline are required.
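A complementary containment measure is to perform weight loading in a child process whose address space is capped, so a crafted archive produces a contained non-zero exit instead of host-level memory exhaustion. A minimal POSIX-only sketch (the resource module is unavailable on Windows; the budget is an illustrative assumption):

```python
import resource    # POSIX-only
import subprocess
import sys

DEFAULT_MEM_BYTES = 8 * 2**30  # illustrative 8 GiB budget for the loader

def run_limited(argv, mem_bytes=DEFAULT_MEM_BYTES) -> int:
    """Run argv in a child with a capped address space; return its exit code."""
    def limit():
        # Applied in the child after fork, before exec
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    return subprocess.run(argv, preexec_fn=limit).returncode
```

For example, run_limited([sys.executable, "-c", "import keras; keras.saving.load_model('model.keras')"]) confines an over-allocating load to the sandboxed child: the MemoryError kills only that process, and the caller sees a non-zero exit code it can log and alert on.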
Weaknesses (CWE)
- CWE-770: Allocation of Resources Without Limits or Throttling
References
- github.com/advisories/GHSA-xfhx-r7ww-5995
- github.com/keras-team/keras/commit/7360d4f0d764fbb1fa9c6408fe53da41974dd4f6
- github.com/keras-team/keras/commit/f704c887bf459b42769bfc8a9182f838009afddb
- github.com/keras-team/keras/pull/21880
- github.com/keras-team/keras/pull/22081
- nvd.nist.gov/vuln/detail/CVE-2026-0897