CVE-2021-37689: TensorFlow Lite: MLIR null ptr deref crashes inference

MEDIUM
Published August 12, 2021
CISO Take

A crafted TFLite model file can crash any process running TensorFlow Lite inference with MLIR optimization enabled, causing availability loss in AI-enabled applications. Upgrade to TensorFlow 2.6.0, or apply the backported fix on the 2.3.x-2.5.x release lines (2.3.4, 2.4.3, 2.5.1). Priority is moderate: the local attack vector limits exposure unless your pipeline accepts externally supplied model files, which is common in model-serving and edge deployment scenarios.

Risk Assessment

CVSS 5.5 (Medium), with a local attack vector and low privilege requirement, reduces opportunistic risk. However, in AI/ML pipelines that ingest third-party or user-uploaded TFLite models, the effective attack surface expands significantly: an adversary who can inject a malicious model file into a processing pipeline turns this into a practical denial of service. No active exploitation has been reported and the CVE is not in CISA KEV, but the fix is straightforward and should be applied as part of normal patching cadence.

Affected Systems

Package: tensorflow (pip)
Vulnerable range: < 2.6.0 (excluding backport releases 2.5.1, 2.4.3, 2.3.4)
Patched: 2.6.0; fix backported to 2.5.1, 2.4.3, and 2.3.4

Do you use tensorflow? You're affected.
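As a quick triage aid, the patched-version logic above can be sketched in Python. This is a hypothetical helper, not part of TensorFlow; it assumes plain x.y.z version strings and conservatively treats anything outside the listed patched lines as unpatched:

```python
def is_patched(version: str) -> bool:
    """Return True if the given TensorFlow version carries the fix for
    CVE-2021-37689. Patched releases per the advisory: 2.6.0, plus
    backports 2.5.1, 2.4.3, and 2.3.4."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    # Minimum patched release on each backported minor line.
    patched_floor = {
        (2, 3): (2, 3, 4),
        (2, 4): (2, 4, 3),
        (2, 5): (2, 5, 1),
    }
    floor = patched_floor.get(parts[:2])
    if floor is not None:
        return parts >= floor
    # Outside the backported lines, only 2.6.0 and later are patched;
    # older unsupported lines are treated as unpatched.
    return parts >= (2, 6, 0)
```

A deployment check might compare `importlib.metadata.version("tensorflow")` against this before enabling MLIR optimization on untrusted models.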

Severity & Risk

CVSS 3.1
5.5 / 10
EPSS
0.01%
chance of exploitation in 30 days
Higher than 2% of all CVEs
Exploitation Status
No known exploitation
Sophistication
Moderate

Attack Surface

AV (Attack Vector): Local
AC (Attack Complexity): Low
PR (Privileges Required): Low
UI (User Interaction): None
S (Scope): Unchanged
C (Confidentiality): None
I (Integrity): None
A (Availability): High

Recommended Action

5 steps
  1. Patch: Upgrade to TensorFlow 2.6.0, or apply cherry-picked commit d6b57f461b39fd1aa8c1b870f1b974aac3554955 on 2.3.x-2.5.x branches.

  2. Workaround: Disable MLIR optimization passes if immediate patching is not feasible (--tflite_model_use_legacy_flatbuffer flag or equivalent).

  3. Input validation: Validate TFLite model files at ingestion points before passing to the optimizer; reject models with unexpected L2Normalize operator configurations.

  4. Isolation: Run TFLite inference in sandboxed processes so a crash does not take down the entire serving stack.

  5. Detection: Alert on unexpected process crashes in TFLite serving pods; monitor for repeated crash-restart loops in inference containers.
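The isolation step above can be sketched with the standard library alone. This is a minimal, hypothetical pattern: the child process here only simulates a loader crash so the sketch is self-contained; in a real deployment the child would instead construct `tf.lite.Interpreter(model_path=...)` and run inference:

```python
import subprocess
import sys

def safe_load(model_path: str, timeout: float = 30.0) -> bool:
    """Load an untrusted TFLite model in a separate process so that a
    native crash (such as the null pointer dereference in the MLIR
    optimization pass) kills only the child, never the serving process.
    Returns True only if the child exits cleanly."""
    # Stand-in child script: pretend any path ending in ".bad" triggers
    # the crash; a real loader would run the TFLite interpreter here.
    child = (
        "import os, sys\n"
        "path = sys.argv[1]\n"
        "os._exit(139) if path.endswith('.bad') else os._exit(0)\n"
    )
    try:
        proc = subprocess.run(
            [sys.executable, "-c", child, model_path], timeout=timeout
        )
    except subprocess.TimeoutExpired:
        return False  # a hung loader is treated as a failed load
    return proc.returncode == 0
```

The serving layer can then reject any model whose sandboxed load fails, feeding the rejection into the detection alerts described in step 5.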

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
8.5 - AI system operation
NIST AI RMF
GOVERN 6.2 - Policies and procedures for AI risk management MANAGE 2.2 - Mechanisms to respond to risks or incidents

Frequently Asked Questions

What is CVE-2021-37689?

A crafted TFLite model file can crash any process running TensorFlow Lite inference with MLIR optimization enabled, causing availability loss in AI-enabled applications. Upgrade to TensorFlow 2.6.0, or apply the backported fix on the 2.3.x-2.5.x release lines (2.3.4, 2.4.3, 2.5.1). Priority is moderate: the local attack vector limits exposure unless your pipeline accepts externally supplied model files, which is common in model-serving and edge deployment scenarios.

Is CVE-2021-37689 actively exploited?

No confirmed active exploitation of CVE-2021-37689 has been reported, but organizations should still patch proactively.

How to fix CVE-2021-37689?

1. Patch: Upgrade to TensorFlow 2.6.0, or apply cherry-picked commit d6b57f461b39fd1aa8c1b870f1b974aac3554955 on 2.3.x-2.5.x branches.
2. Workaround: Disable MLIR optimization passes if immediate patching is not feasible (--tflite_model_use_legacy_flatbuffer flag or equivalent).
3. Input validation: Validate TFLite model files at ingestion points before passing to the optimizer; reject models with unexpected L2Normalize operator configurations.
4. Isolation: Run TFLite inference in sandboxed processes so a crash does not take down the entire serving stack.
5. Detection: Alert on unexpected process crashes in TFLite serving pods; monitor for repeated crash-restart loops in inference containers.

What systems are affected by CVE-2021-37689?

This vulnerability affects the following AI/ML architecture patterns: edge/mobile inference, model serving, training pipelines, model conversion pipelines.

What is the CVSS score for CVE-2021-37689?

CVE-2021-37689 has a CVSS v3.1 base score of 5.5 (MEDIUM). The EPSS exploitation probability is 0.01%.

Technical Details

NVD Description

TensorFlow is an end-to-end open source platform for machine learning. In affected versions an attacker can craft a TFLite model that would trigger a null pointer dereference, which would result in a crash and denial of service. This is caused by the MLIR optimization of `L2NormalizeReduceAxis` operator. The [implementation](https://github.com/tensorflow/tensorflow/blob/149562d49faa709ea80df1d99fc41d005b81082a/tensorflow/compiler/mlir/lite/transforms/optimize.cc#L67-L70) unconditionally dereferences a pointer to an iterator to a vector without checking that the vector has elements. We have patched the issue in GitHub commit d6b57f461b39fd1aa8c1b870f1b974aac3554955. The fix will be included in TensorFlow 2.6.0. We will also cherrypick this commit on TensorFlow 2.5.1, TensorFlow 2.4.3, and TensorFlow 2.3.4, as these are also affected and still in supported range.
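The bug and its fix can be illustrated with a small Python analogue of the C++ pattern. The real code lives in tensorflow/compiler/mlir/lite/transforms/optimize.cc; the function names below are illustrative only:

```python
def first_reduce_axis_buggy(axes):
    """Pre-patch pattern: dereferences the first element without
    checking that the container is non-empty. An empty reduction-axis
    list from a crafted model raises here, the Python analogue of the
    null pointer dereference that crashes the C++ pass."""
    return axes[0]

def first_reduce_axis_fixed(axes):
    """Post-patch pattern: reject the empty case before dereferencing,
    mirroring the guard added in commit d6b57f46. Returning None stands
    in for the optimization pass declining to rewrite the operator."""
    if not axes:
        return None
    return axes[0]
```

The fix is purely defensive: models with a well-formed L2NormalizeReduceAxis operator are optimized as before, while the malformed empty-axis case no longer reaches the dereference.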

Exploitation Scenario

An adversary targets an AI application that accepts user-supplied TFLite models (e.g., a mobile ML platform, federated learning hub, or model conversion service). They craft a malformed TFLite model containing an L2NormalizeReduceAxis operator with a reduction axis vector containing zero elements. When the MLIR optimization pass processes this model, it unconditionally dereferences an iterator to the empty vector, triggering a null pointer dereference and crashing the inference process. In a containerized serving environment, this causes repeated pod restarts, degrading availability. In a batch conversion pipeline, it could block all downstream inference until the malicious model artifact is removed.
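One way to keep a single malicious artifact from wedging a batch pipeline in the crash-restart loop described above is a small quarantine gate. This is a hypothetical helper, not a TensorFlow API:

```python
from collections import defaultdict

class CrashQuarantine:
    """Quarantine a model artifact after repeated loader crashes so a
    single malicious file cannot keep a conversion pipeline or serving
    pod in a crash-restart loop. The crash events would come from the
    pipeline's process supervisor (e.g. non-zero child exit codes)."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self._crashes = defaultdict(int)
        self._quarantined = set()

    def record_crash(self, model_id: str) -> None:
        """Count a crash attributed to model_id; quarantine it once the
        crash count reaches the threshold."""
        self._crashes[model_id] += 1
        if self._crashes[model_id] >= self.threshold:
            self._quarantined.add(model_id)

    def is_quarantined(self, model_id: str) -> bool:
        return model_id in self._quarantined
```

Quarantined artifacts can then be pulled from the processing queue and flagged for analysis, restoring downstream availability without manual intervention.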

Weaknesses (CWE)

CWE-476: NULL Pointer Dereference

CVSS Vector

CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

Timeline

Published
August 12, 2021
Last Modified
November 21, 2024
First Seen
August 12, 2021

Related Vulnerabilities