CVE-2025-12058: Keras: safe_mode bypass enables file read and SSRF
GHSA-mq84-hjqx-cwf2 · MEDIUM · PoC AVAILABLE

If your ML infrastructure loads Keras model files from external or untrusted sources, you are exposed even when safe_mode=True is explicitly set: the documented security control is ineffective. A crafted .keras archive can silently exfiltrate local files (API keys, cloud credentials, /etc/passwd) or pivot to internal services via SSRF using cloud storage handlers. Patch to keras >= 3.12.0 immediately and treat any model file from an uncontrolled source as untrusted input until verified.
Risk Assessment
Contextual risk is higher than the medium CVSS baseline suggests. The critical factor is the bypass of safe_mode=True — a control that organizations explicitly invoke when loading potentially untrusted models, meaning defenders have a false sense of security. SSRF via tf.io.gfile is particularly dangerous in cloud environments where GCS/S3/HDFS handlers and the EC2/GCP metadata service (169.254.169.254) are reachable from within training or serving infrastructure. Exposure is significant wherever model-sharing workflows exist: MLOps platforms, collaborative notebooks, model hubs, or CI/CD pipelines that pull models from external registries. EPSS is currently low (0.00076) but the attack is trivial to craft once the researcher disclosure is public.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| keras | pip | < 3.12.0 | 3.12.0 |
If you run any keras version below 3.12.0, you are affected.
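As a quick check, the installed version can be compared against the patched release programmatically. This is a minimal sketch using the standard library; it ignores pre-release suffixes, so treat any borderline result with suspicion.

```python
from importlib.metadata import version, PackageNotFoundError

PATCHED = (3, 12, 0)  # first keras release with the fix

def parse(v: str) -> tuple:
    """Parse a dotted version string into a tuple of ints, ignoring suffixes."""
    nums = []
    for part in v.split(".")[:3]:
        digits = "".join(ch for ch in part if ch.isdigit())
        nums.append(int(digits) if digits else 0)
    return tuple(nums)

def keras_is_patched() -> bool:
    """True if the installed keras is at or above 3.12.0, or not installed at all."""
    try:
        return parse(version("keras")) >= PATCHED
    except PackageNotFoundError:
        return True  # keras not installed: nothing to patch
```

For a one-off check, `pip show keras` (as recommended below) is equivalent; this helper is useful in CI gates that must fail a build on a vulnerable version.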
Recommended Action
Six steps:

1. PATCH: Upgrade keras to >= 3.12.0 immediately. Verify with pip show keras.
2. AUDIT: Inventory all code paths using keras.Model.load_model() or tf.keras.models.load_model() and identify where model files originate.
3. ISOLATE: Until patched, run model loading in network-restricted environments (no outbound HTTP/HTTPS, no access to metadata endpoints). Block 169.254.169.254 at the host/container level.
4. VALIDATE INPUT: Never accept .keras files from untrusted sources without sandbox inspection. Consider hash verification against a trusted artifact registry.
5. DETECT: Monitor for unexpected outbound HTTP requests or unusual file reads (especially /etc/, ~/.ssh/, and credential files) from ML training/serving processes.
6. WORKAROUND (if patching is not immediately possible): Wrap load_model calls in a sandboxed subprocess with restricted filesystem and network access using seccomp/namespaces or gVisor.
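Step 4's hash verification can be sketched in a few lines. The function names and the registry lookup are illustrative (your artifact registry supplies the trusted digest); the digest computation itself uses only the standard library, and the keras import is deferred so nothing in the model file is touched before verification passes.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def load_verified_model(path: str, expected_digest: str):
    """Refuse to load a .keras file whose digest doesn't match the trusted value.

    expected_digest should come from a trusted artifact registry, never from
    the same channel that delivered the model file.
    """
    actual = sha256_of(path)
    if actual != expected_digest:
        raise ValueError(f"model digest mismatch: {actual} != {expected_digest}")
    import keras  # imported lazily so verification always runs first
    return keras.saving.load_model(path, safe_mode=True)
```

Note that hash pinning only proves the file is the one you expected; it does not make an untrusted model safe, so it complements (not replaces) the patch.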
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-12058?
CVE-2025-12058 is a vulnerability in the Keras ML framework: a crafted .keras archive can abuse the StringLookup layer's vocabulary argument during keras.Model.load_model to read arbitrary local files (API keys, cloud credentials, /etc/passwd) or trigger SSRF against internal services, even when safe_mode=True is explicitly set. It is fixed in keras 3.12.0; until you have upgraded, treat any model file from an uncontrolled source as untrusted input.
Is CVE-2025-12058 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2025-12058, increasing the risk of exploitation.
How to fix CVE-2025-12058?
1. PATCH: Upgrade keras to >= 3.12.0 immediately. Verify with pip show keras.
2. AUDIT: Inventory all code paths using keras.Model.load_model() or tf.keras.models.load_model() and identify where model files originate.
3. ISOLATE: Until patched, run model loading in network-restricted environments (no outbound HTTP/HTTPS, no access to metadata endpoints). Block 169.254.169.254 at the host/container level.
4. VALIDATE INPUT: Never accept .keras files from untrusted sources without sandbox inspection. Consider hash verification against a trusted artifact registry.
5. DETECT: Monitor for unexpected outbound HTTP requests or unusual file reads (especially /etc/, ~/.ssh/, and credential files) from ML training/serving processes.
6. WORKAROUND (if patching is not immediately possible): Wrap load_model calls in a sandboxed subprocess with restricted filesystem and network access using seccomp/namespaces or gVisor.
What systems are affected by CVE-2025-12058?
This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps platforms, model registries, collaborative notebooks.
What is the CVSS score for CVE-2025-12058?
The advisory is rated MEDIUM severity, but no numeric CVSS score has been assigned yet.
Technical Details
NVD Description
The Keras.Model.load_model method, including when executed with the intended security mitigation safe_mode=True, is vulnerable to arbitrary local file loading and Server-Side Request Forgery (SSRF). This vulnerability stems from the way the StringLookup layer is handled during model loading from a specially crafted .keras archive. The constructor for the StringLookup layer accepts a vocabulary argument that can specify a local file path or a remote file path.

- Arbitrary Local File Read: An attacker can create a malicious .keras file that embeds a local path in the StringLookup layer's configuration. When the model is loaded, Keras will attempt to read the content of the specified local file and incorporate it into the model state (e.g., retrievable via get_vocabulary()), allowing an attacker to read arbitrary local files on the hosting system.
- Server-Side Request Forgery (SSRF): Keras utilizes tf.io.gfile for file operations. Since tf.io.gfile supports remote filesystem handlers (such as GCS and HDFS) and HTTP/HTTPS protocols, the same mechanism can be leveraged to fetch content from arbitrary network endpoints on the server's behalf, resulting in an SSRF condition.

The security issue is that the feature allowing external path loading was not properly restricted by the safe_mode=True flag, which was intended to prevent such unintended data access.
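To make the mechanism concrete, the fragment below sketches the shape of a layer config that carries the payload. The key layout is a simplified illustration, not a byte-accurate Keras serialization; the point is only that the vocabulary field holds a string the unpatched loader hands to tf.io.gfile and reads eagerly.

```python
import json

# Simplified sketch of the relevant fragment of a crafted model config.
# Real .keras archives store a fuller config.json inside a zip archive;
# key names here are illustrative, not byte-accurate.
malicious_layer_config = {
    "class_name": "StringLookup",
    "config": {
        "name": "lookup",
        # The payload: a local path (arbitrary file read) or a URL such as
        # an instance-metadata endpoint (SSRF). The unpatched loader reads
        # this target at load time, even with safe_mode=True.
        "vocabulary": "/etc/passwd",
    },
}

print(json.dumps(malicious_layer_config, indent=2))
```

Swapping the path for a URL like the metadata endpoint mentioned above turns the same field into the SSRF vector.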
Exploitation Scenario
An attacker targets an ML platform that accepts user-submitted .keras model files for evaluation or fine-tuning. They craft a malicious .keras archive containing a StringLookup layer with vocabulary set to a local file path (e.g., /var/run/secrets/kubernetes.io/serviceaccount/token or /root/.aws/credentials). The victim platform calls keras.Model.load_model('malicious.keras', safe_mode=True) — believing safe_mode protects them. Keras reads the target file and populates the vocabulary state. The attacker retrieves the exfiltrated content via the model's get_vocabulary() API call or inference response. In a cloud environment, the attacker alternatively sets vocabulary to https://169.254.169.254/latest/meta-data/iam/security-credentials/ to harvest IAM credentials via SSRF, enabling lateral movement across the cloud account.
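A scenario like this can be screened for before any load call, since a .keras file is a zip archive containing a JSON model config. The sketch below assumes the Keras 3 archive layout (a config.json member) and flags StringLookup layers whose vocabulary is a string path or URL rather than an inline list; it is a triage heuristic, not a complete sandbox.

```python
import json
import zipfile

def suspicious_vocab_entries(keras_path: str) -> list:
    """Return string 'vocabulary' values found in StringLookup layer configs.

    A string vocabulary means the loader would read a file path or URL at
    load time; a legitimate inline vocabulary is a list, not a string.
    Assumes the Keras 3 archive layout with a config.json member.
    """
    findings = []
    with zipfile.ZipFile(keras_path) as zf:
        config = json.loads(zf.read("config.json"))

    def walk(node):
        if isinstance(node, dict):
            if node.get("class_name") == "StringLookup":
                vocab = node.get("config", {}).get("vocabulary")
                if isinstance(vocab, str):
                    findings.append(vocab)
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(config)
    return findings
```

Run in a quarantine step, a non-empty result would justify rejecting the artifact outright rather than loading it at all.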
Related Vulnerabilities
- CVE-2025-49655 (9.8): keras: Deserialization enables RCE (same package: keras)
- CVE-2025-1550 (9.8): Keras: safe_mode bypass enables RCE via model loading (same package: keras)
- CVE-2024-3660 (9.8): Keras: RCE via malicious model deserialization (same package: keras)
- CVE-2024-49326 (9.8): Affiliator WP Plugin: Unauthenticated Web Shell Upload (same package: keras)
- CVE-2025-12060 (9.8): keras: Path Traversal enables file access (same package: keras)