CVE-2022-23594: TensorFlow MLIR: heap OOB via malicious SavedModel file

MEDIUM
Published February 4, 2022
CISO Take

This vulnerability allows a local attacker with low privileges to crash TensorFlow's MLIR import pipeline—and potentially achieve heap memory corruption—by supplying a crafted SavedModel file. The practical risk is highest in ML pipelines that load externally-sourced or user-supplied models, where the 'local' constraint is effectively bypassed. Patch to TF 2.8.0 or the relevant patched release immediately, and enforce strict allowlisting of model provenance in any pipeline that loads SavedModel artifacts.

Risk Assessment

Medium severity overall, but context-dependent. The local attack vector (AV:L) and low-privilege requirement limit opportunistic exploitation, yet in ML Ops environments models frequently transit shared storage, artifact registries, and CI/CD pipelines—so 'local' access to a model file is not a high bar. Heap OOB writes (CWE-787) have a non-trivial path to code execution under exploitation-favorable conditions. No CISA KEV listing and no known active exploitation lowers urgency, but the crash-on-load behavior makes denial-of-service trivial for any party who can influence the SavedModel consumed by a target system.

Affected Systems

Package: tensorflow
Ecosystem: pip
Patched: 2.8.0 (per GHSA-9x52-887g-fhc2)

Do you use an unpatched tensorflow release? You're affected.
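A quick way to triage exposure is to compare the installed tensorflow version against the patched release named in this advisory. This is a minimal sketch assuming the 2.8.0 threshold from the remediation guidance below; the `parse` helper deliberately ignores pre-release suffixes, and backport releases on older minor lines are not modeled.

```python
from importlib import metadata

# First fully patched release line per this advisory's remediation guidance.
PATCHED = (2, 8, 0)

def parse(version: str) -> tuple:
    """Keep only the leading numeric components ("2.8.0rc1" -> (2, 8, 0))."""
    parts = []
    for piece in version.split("."):
        digits = ""
        for ch in piece:
            if ch.isdigit():
                digits += ch
            else:
                break
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

def is_vulnerable(installed: str) -> bool:
    """True when the installed version predates the patched release."""
    return parse(installed) < PATCHED

def check() -> None:
    try:
        v = metadata.version("tensorflow")
    except metadata.PackageNotFoundError:
        print("tensorflow not installed")
        return
    status = "VULNERABLE" if is_vulnerable(v) else "patched"
    print(f"tensorflow {v}: {status} (CVE-2022-23594)")
```

Run `check()` inside the environment that actually loads models; a pinned lockfile elsewhere does not prove what the serving host imports.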

Severity & Risk

CVSS 3.1
5.5 / 10
EPSS
0.02%
chance of exploitation in 30 days
Higher than 5% of all CVEs
Exploitation Status
No known exploitation
Sophistication
Moderate

Attack Surface

AV  Attack Vector        Local
AC  Attack Complexity    Low
PR  Privileges Required  Low
UI  User Interaction     None
S   Scope                Unchanged
C   Confidentiality      None
I   Integrity            None
A   Availability         High

Recommended Action

5 steps
  1. Patch: Upgrade TensorFlow to 2.8.0, or to one of the backport releases (2.7.1, 2.6.3, 2.5.3) that received the cherry-picked fix, per GHSA-9x52-887g-fhc2.

  2. Model provenance control: Enforce cryptographic signing or hash verification of SavedModel artifacts before loading—reject unsigned or unverified models in all automated pipelines.

  3. Sandbox model loading: Run TF model-load operations in isolated subprocesses or containers with restricted memory limits; a crash should not cascade to the serving host.

  4. Registry hygiene: Audit any model registry or shared artifact store for unexpected SavedModel modifications.

  5. Detection: Monitor for Python interpreter crashes or OOM signals originating from TF MLIR import code paths; unexpected core dumps during model loading warrant investigation.
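Steps 2, 3, and 5 above can be sketched together: verify a SavedModel directory's digest against an allowlist, then load it in a memory-capped subprocess so a CVE-2022-23594 crash stays contained and a signal-killed loader gets flagged. All names here (`sha256_tree`, `load_untrusted_model`) are illustrative, not part of any TensorFlow API; the 2 GiB cap and 300-second timeout are arbitrary example values, and the `resource` rlimit call is POSIX-only.

```python
import hashlib
import os
import signal
import subprocess
import sys

def sha256_tree(root: str) -> str:
    """Hash every file in a SavedModel directory in a stable order."""
    h = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            h.update(os.path.relpath(path, root).encode())
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
    return h.hexdigest()

# Loader script executed in an isolated child process (POSIX-only rlimit).
LOADER = r"""
import resource, sys
# Cap address space at 2 GiB so a runaway OOB-triggered allocation fails fast.
resource.setrlimit(resource.RLIMIT_AS, (2 << 30, 2 << 30))
import tensorflow as tf
tf.saved_model.load(sys.argv[1])
"""

def load_untrusted_model(path: str, allowlist: set) -> bool:
    """Verify provenance, then load in a sandboxed child; True on clean load."""
    digest = sha256_tree(path)
    if digest not in allowlist:
        print(f"rejected: {digest} not in allowlist")
        return False
    proc = subprocess.run([sys.executable, "-c", LOADER, path], timeout=300)
    if proc.returncode < 0:
        # Negative return code means the child died to a signal (e.g. SIGSEGV).
        sig = signal.Signals(-proc.returncode).name
        print(f"loader killed by {sig} -- possible CVE-2022-23594 trigger")
        return False
    return proc.returncode == 0
```

The design point is that the serving host never calls `tf.saved_model.load` directly on untrusted input: the crash domain is the child process, and its exit signal doubles as the detection signal from step 5.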

CISA SSVC Assessment

Decision: Track
Exploitation: none
Automatable: no
Technical Impact: total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

ISO 42001
A.6.2 - AI system risk controls
NIST AI RMF
GOVERN 1.7 - Processes for AI risk identification
MANAGE 2.2 - Mechanisms to sustain the value of deployed AI systems
OWASP LLM Top 10
LLM03:2025 - Supply Chain

Frequently Asked Questions

What is CVE-2022-23594?

This vulnerability allows a local attacker with low privileges to crash TensorFlow's MLIR import pipeline—and potentially achieve heap memory corruption—by supplying a crafted SavedModel file. The practical risk is highest in ML pipelines that load externally-sourced or user-supplied models, where the 'local' constraint is effectively bypassed. Patch to TF 2.8.0 or the relevant patched release immediately, and enforce strict allowlisting of model provenance in any pipeline that loads SavedModel artifacts.

Is CVE-2022-23594 actively exploited?

No confirmed active exploitation of CVE-2022-23594 has been reported, but organizations should still patch proactively.

How to fix CVE-2022-23594?

1. Patch: Upgrade TensorFlow to 2.8.0 or apply the security patch from GHSA-9x52-887g-fhc2.
2. Model provenance control: Enforce cryptographic signing or hash verification of SavedModel artifacts before loading—reject unsigned or unverified models in all automated pipelines.
3. Sandbox model loading: Run TF model-load operations in isolated subprocesses or containers with restricted memory limits; a crash should not cascade to the serving host.
4. Registry hygiene: Audit any model registry or shared artifact store for unexpected SavedModel modifications.
5. Detection: Monitor for Python interpreter crashes or OOM signals originating from TF MLIR import code paths; unexpected core dumps during model loading warrant investigation.

What systems are affected by CVE-2022-23594?

This vulnerability affects the following AI/ML architecture patterns: Training pipelines, Model serving, ML framework runtimes, Model registries / artifact stores, MLOps / CI-CD pipelines.

What is the CVSS score for CVE-2022-23594?

CVE-2022-23594 has a CVSS v3.1 base score of 5.5 (MEDIUM). The EPSS exploitation probability is 0.02%.

Technical Details

NVD Description

Tensorflow is an Open Source Machine Learning Framework. The TFG dialect of TensorFlow (MLIR) makes several assumptions about the incoming `GraphDef` before converting it to the MLIR-based dialect. If an attacker changes the `SavedModel` format on disk to invalidate these assumptions and the `GraphDef` is then converted to MLIR-based IR then they can cause a crash in the Python interpreter. Under certain scenarios, heap OOB read/writes are possible. These issues have been discovered via fuzzing and it is possible that more weaknesses exist. We will patch them as they are discovered.

Exploitation Scenario

An adversary with access to a shared model repository (internal artifact store, MLflow registry, S3 bucket, or even a public model hub) uploads a maliciously crafted SavedModel whose GraphDef structure violates the assumptions made by TensorFlow's TFG dialect during MLIR conversion. When an automated training or serving pipeline pulls and loads this model—a routine operation in most MLOps workflows—the MLIR import triggers a heap out-of-bounds read or write. At minimum this crashes the Python process hosting the TF session (denial of service). In an exploitation-favorable heap layout, the OOB write could be leveraged for code execution within the model-loading process, potentially enabling lateral movement within the ML infrastructure or exfiltration of training data and model weights.

CVSS Vector

CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
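The 5.5 base score follows mechanically from this vector. A minimal sketch of the CVSS v3.1 base-score formula, covering the scope-unchanged case only, using the metric weights from the FIRST CVSS v3.1 specification:

```python
# CVSS v3.1 metric weights (scope-unchanged case only).
W = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},  # scope-unchanged PR weights
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(x: float) -> float:
    """Spec-defined Roundup: smallest 1-decimal value >= x, with an FP guard."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a) -> float:
    """CVSS v3.1 base score, scope unchanged."""
    iss = 1 - (1 - W["CIA"][c]) * (1 - W["CIA"][i]) * (1 - W["CIA"][a])
    impact = 6.42 * iss
    exploitability = 8.22 * W["AV"][av] * W["AC"][ac] * W["PR"][pr] * W["UI"][ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H -> 5.5
print(base_score("L", "L", "L", "N", "N", "N", "H"))
```

The Availability:High term alone (ISS = 0.56, Impact ≈ 3.60) plus the local-vector exploitability (≈ 1.83) lands at 5.43, which rounds up to the published 5.5.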

Timeline

Published
February 4, 2022
Last Modified
November 21, 2024
First Seen
February 4, 2022
