CVE-2026-1669: keras: File Control enables path manipulation

GHSA-3m4q-jmj6-r34q HIGH
Published February 11, 2026
CISO Take

CVE-2026-1669 is a high-severity arbitrary file read in Keras 3.0.0–3.13.1 that requires no authentication or user interaction to exploit. Any system that loads .keras model files from untrusted sources — model APIs, MLOps pipelines, collaborative ML platforms — is at risk of credential and secrets exposure. Patch to a fixed Keras version immediately and enforce trusted-source-only model loading across all inference and training infrastructure.

Risk Assessment

HIGH. The CVSS vector (AV:N/AC:L/PR:N/UI:N) means this is network-exploitable with zero friction. The attacker only needs the target to load a crafted model file — no credentials, no click required. Keras is ubiquitous across ML stacks (TensorFlow, JAX, multi-backend pipelines), dramatically widening the blast radius. The confidentiality impact is high; an attacker can read any file accessible to the process — .env files, cloud provider credentials, service account keys, database connection strings. Not currently in CISA KEV, but the low exploitation complexity makes active exploitation likely in the short term.

Affected Systems

Package | Ecosystem | Vulnerable Range | Patched
keras | pip | >= 3.0.0, < 3.13.0 | No patch
keras | pip | >= 3.13.0, < 3.13.2 | 3.13.2

Severity & Risk

CVSS 3.1
7.5 / 10
EPSS
0.01%
chance of exploitation in 30 days
Higher than 3% of all CVEs
Exploitation Status
No known exploitation
Sophistication
Trivial

Attack Surface

AV: Network
AC: Low
PR: None
UI: None
S: Unchanged
C: High
I: None
A: None

Recommended Action

  1. PATCH

    Upgrade Keras to 3.13.2 or later, the fixed release. Monitor the official Keras changelog and the GitHub advisory for any follow-up fixes.

  2. WORKAROUND (if patch unavailable)

    Implement a custom model loading wrapper that strips or rejects HDF5 external dataset references before passing files to Keras.
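Since a .keras file is a zip archive, a wrapper of this kind could scan the HDF5 members inside it and refuse to proceed if any dataset declares external storage. A minimal sketch using h5py, assuming the weights live in an .h5 member of the archive (member names vary by Keras version):

```python
import tempfile
import zipfile

import h5py


def reject_external_refs(keras_path: str) -> None:
    """Raise ValueError if any HDF5 member of a .keras archive contains
    a dataset backed by external storage (the CVE-2026-1669 vector)."""
    with zipfile.ZipFile(keras_path) as archive:
        for member in archive.namelist():
            if not member.endswith((".h5", ".hdf5")):
                continue
            with tempfile.TemporaryDirectory() as tmp:
                extracted = archive.extract(member, tmp)
                with h5py.File(extracted, "r") as f:
                    def check(name, obj):
                        # Dataset.external is non-None only when the
                        # dataset's bytes live in a separate file.
                        if isinstance(obj, h5py.Dataset) and obj.external:
                            raise ValueError(
                                f"external HDF5 reference: {member}:{name}")
                    f.visititems(check)
```

Call `reject_external_refs(path)` before `keras.models.load_model(path)`; legitimately serialized Keras models do not use HDF5 external storage, so any hit is a strong rejection signal.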

  3. MODEL SOURCE CONTROL

    Enforce cryptographic signing or hash verification for all model files loaded in production. Reject models from unverified sources at the pipeline ingestion layer.
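Hash verification at the ingestion layer can be as simple as comparing a streamed SHA-256 digest against a pinned value; a sketch using only the standard library (where the trusted digest comes from, e.g. a signed manifest, is up to your pipeline):

```python
import hashlib


def verify_model_digest(path: str, expected_sha256: str) -> str:
    """Stream a model file through SHA-256 and refuse to proceed
    on any mismatch with the pinned digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large model files don't load into RAM.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    digest = h.hexdigest()
    if digest != expected_sha256.lower():
        raise ValueError(f"digest mismatch for {path}: got {digest}")
    return digest
```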

  4. LEAST PRIVILEGE

    Run model loading processes with a restricted filesystem view (container with read-only mounts, seccomp profiles) limiting accessible paths.

  5. DETECTION

    Alert on file read syscalls from Python/ML processes accessing sensitive paths (/etc, ~/.aws, .env, *.pem, *.key) during model loading operations. Deploy eBPF-based runtime monitoring (Falco or similar) on ML inference nodes.
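eBPF tooling such as Falco covers this at the host level; as a lightweight in-process complement, Python's `sys.addaudithook` can flag file opens against sensitive paths while a model loads. A sketch (the watchlists are illustrative, not exhaustive):

```python
import os
import sys

# Illustrative watchlists -- tune to your environment.
SENSITIVE_PREFIXES = ("/etc/", "/run/secrets", os.path.expanduser("~/.aws"))
SENSITIVE_SUFFIXES = (".env", ".pem", ".key")

suspicious_opens = []


def _audit(event, args):
    # The "open" audit event fires before the file is actually opened,
    # so even failed open attempts are recorded.
    if event == "open" and args and isinstance(args[0], str):
        path = args[0]
        if path.startswith(SENSITIVE_PREFIXES) or path.endswith(SENSITIVE_SUFFIXES):
            suspicious_opens.append(path)


sys.addaudithook(_audit)  # note: audit hooks cannot be removed once installed
```

Wrap `keras.models.load_model()` and alert (or raise) if `suspicious_opens` is non-empty afterwards; a model load has no business touching any of those paths.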

  6. AUDIT

    Inventory all Keras versions deployed across training, serving, and evaluation environments — include transitive dependencies via pip freeze.
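For the audit step, a small helper that classifies an installed Keras version against the advisory's affected range (3.0.0 through 3.13.1) can be dropped into a CI check. A sketch using only the standard library; the version parsing is deliberately naive and assumes plain X.Y.Z strings:

```python
from importlib.metadata import PackageNotFoundError, version

AFFECTED_LOW = (3, 0, 0)   # first affected release
FIXED = (3, 13, 2)         # first fixed release


def parse(v: str) -> tuple:
    """Naive parse of an X.Y.Z version string into a comparable tuple."""
    return tuple(int(part) for part in v.split(".")[:3])


def is_vulnerable(v: str) -> bool:
    return AFFECTED_LOW <= parse(v) < FIXED


def installed_keras_status() -> str:
    try:
        v = version("keras")
    except PackageNotFoundError:
        return "keras not installed"
    return f"keras {v}: {'VULNERABLE' if is_vulnerable(v) else 'ok'}"
```

Run the same check against the output of `pip freeze` in every environment (training, serving, evaluation) to catch transitive installs.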

CISA SSVC Assessment

Decision: Track
Exploitation: none
Automatable: no
Technical Impact: partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
Article 9 - Risk management system
ISO 42001
A.6.1.2 - AI risk assessment
A.6.1.3 - AI system supply chain
A.8.3 - AI system security
A.8.4 - AI system resources — data and tools for AI system
NIST AI RMF
GOVERN 1.4 - Organizational teams are committed to a culture that considers and communicates AI risk
GOVERN 1.7 - Processes for AI risk — third-party dependencies
MANAGE 2.2 - Mechanisms to sustain AI risk management
OWASP LLM Top 10
LLM03:2025 - Supply Chain Vulnerabilities
LLM04 - Model Supply Chain

Frequently Asked Questions

What is CVE-2026-1669?

CVE-2026-1669 is a high-severity arbitrary file read in Keras 3.0.0–3.13.1 that requires no authentication or user interaction to exploit. Any system that loads .keras model files from untrusted sources — model APIs, MLOps pipelines, collaborative ML platforms — is at risk of credential and secrets exposure. Patch to a fixed Keras version immediately and enforce trusted-source-only model loading across all inference and training infrastructure.

Is CVE-2026-1669 actively exploited?

No confirmed active exploitation of CVE-2026-1669 has been reported, but organizations should still patch proactively.

How to fix CVE-2026-1669?

1. PATCH: Upgrade Keras to 3.13.2 or later, the fixed release. Monitor the official Keras changelog and GitHub advisory for follow-up fixes.
2. WORKAROUND (if patch unavailable): Implement a custom model loading wrapper that strips or rejects HDF5 external dataset references before passing files to Keras.
3. MODEL SOURCE CONTROL: Enforce cryptographic signing or hash verification for all model files loaded in production. Reject models from unverified sources at the pipeline ingestion layer.
4. LEAST PRIVILEGE: Run model loading processes with a restricted filesystem view (container with read-only mounts, seccomp profiles) limiting accessible paths.
5. DETECTION: Alert on file read syscalls from Python/ML processes accessing sensitive paths (/etc, ~/.aws, .env, *.pem, *.key) during model loading operations. Deploy eBPF-based runtime monitoring (Falco or similar) on ML inference nodes.
6. AUDIT: Inventory all Keras versions deployed across training, serving, and evaluation environments, including transitive dependencies via pip freeze.

What systems are affected by CVE-2026-1669?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps platforms, model registries, CI/CD ML evaluation pipelines, multi-tenant ML inference APIs, agent frameworks using Keras-based models.

What is the CVSS score for CVE-2026-1669?

CVE-2026-1669 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.01%.

Technical Details

NVD Description

Arbitrary file read in the model loading mechanism (HDF5 integration) in Keras versions 3.0.0 through 3.13.1 on all supported platforms allows a remote attacker to read local files and disclose sensitive information via a crafted .keras model file utilizing HDF5 external dataset references.

Exploitation Scenario

Adversary crafts a .keras model file embedding HDF5 external dataset references pointing to high-value local paths: /proc/1/environ (environment variables), ~/.aws/credentials, /run/secrets/*, or .env files common in Dockerized ML services. The file is published to a public model hub (e.g., HuggingFace) masquerading as a legitimate fine-tuned model, or submitted via a model evaluation API endpoint. When the target's automated pipeline or ML engineer calls keras.models.load_model() on this file, Keras resolves the external HDF5 references and reads the local files. In an inference API context, the resolved file contents surface in model metadata or error responses, disclosing credentials. An attacker with read access to cloud provider keys achieves full cloud account compromise from a single model file download.
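The underlying mechanism is a documented HDF5 feature: a dataset's bytes can live in a separate file named in the dataset's metadata, and the reader pulls them in transparently. A benign demonstration with h5py (file names and contents are illustrative):

```python
import os
import tempfile

import h5py

workdir = tempfile.mkdtemp()

# Stand-in for a real secret such as ~/.aws/credentials.
secret_path = os.path.join(workdir, "secret.txt")
with open(secret_path, "wb") as f:
    f.write(b"AKIA_FAKE_KEY_01")  # 16 bytes

# Only a *pointer* to secret.txt is stored in demo.h5; the dataset's
# bytes stay in the external file and are read back transparently.
h5_path = os.path.join(workdir, "demo.h5")
with h5py.File(h5_path, "w") as f:
    f.create_dataset("leak", shape=(16,), dtype="u1",
                     external=[(secret_path, 0, 16)])

with h5py.File(h5_path, "r") as f:
    leaked = bytes(f["leak"][:])
# leaked == b"AKIA_FAKE_KEY_01"
```

Swap `secret_path` for a path on the victim's machine and `demo.h5` for the weights member of a .keras archive, and the model loader itself performs the read.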

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N

Timeline

Published
February 11, 2026
Last Modified
February 26, 2026
First Seen
February 11, 2026

Related Vulnerabilities