CVE-2021-37689: TensorFlow Lite: MLIR null ptr deref crashes inference
Severity: Medium. A crafted TFLite model file can crash any process running TensorFlow Lite inference with MLIR optimization enabled, causing availability loss in AI-enabled applications. Upgrade to TensorFlow 2.6.0 or apply the backport patches for the 2.3.x-2.5.x branches. Priority is moderate: the local attack vector limits exposure unless your pipeline accepts externally supplied model files, which is common in model-serving and edge deployment scenarios.
Risk Assessment
CVSS 5.5 Medium with local attack vector and low privilege requirement reduces opportunistic risk. However, in AI/ML pipelines that ingest third-party or user-uploaded TFLite models, the effective attack surface expands significantly. An adversary who can inject a malicious model file into a processing pipeline upgrades this to a practical DoS. No active exploitation reported and not in CISA KEV, but the fix is straightforward and should be applied as part of normal patching cadence.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| tensorflow | pip | < 2.6.0 | 2.6.0; backports in 2.5.1, 2.4.3, 2.3.4 |

You are affected if you run tensorflow in the vulnerable range with MLIR optimization enabled.
Recommended Action
Five steps:
1. Patch: Upgrade to TensorFlow 2.6.0, or apply cherry-picked commit d6b57f461b39fd1aa8c1b870f1b974aac3554955 on the 2.3.x-2.5.x branches.
2. Workaround: Disable MLIR optimization passes if immediate patching is not feasible (--tflite_model_use_legacy_flatbuffer flag or equivalent).
3. Input validation: Validate TFLite model files at ingestion points before passing them to the optimizer; reject models with unexpected L2Normalize operator configurations.
4. Isolation: Run TFLite inference in sandboxed processes so a crash does not take down the entire serving stack.
5. Detection: Alert on unexpected process crashes in TFLite serving pods; monitor for repeated crash-restart loops in inference containers.
Frequently Asked Questions
What is CVE-2021-37689?
A crafted TFLite model file can crash any process running TensorFlow Lite inference with MLIR optimization enabled, causing availability loss in AI-enabled applications. Upgrade to TensorFlow 2.6.0 or apply the backport patches for the 2.3.x-2.5.x branches. Priority is moderate: the local attack vector limits exposure unless your pipeline accepts externally supplied model files, which is common in model-serving and edge deployment scenarios.
Is CVE-2021-37689 actively exploited?
No confirmed active exploitation of CVE-2021-37689 has been reported, but organizations should still patch proactively.
How to fix CVE-2021-37689?
1. Patch: Upgrade to TensorFlow 2.6.0, or apply cherry-picked commit d6b57f461b39fd1aa8c1b870f1b974aac3554955 on the 2.3.x-2.5.x branches.
2. Workaround: Disable MLIR optimization passes if immediate patching is not feasible (--tflite_model_use_legacy_flatbuffer flag or equivalent).
3. Input validation: Validate TFLite model files at ingestion points before passing them to the optimizer; reject models with unexpected L2Normalize operator configurations.
4. Isolation: Run TFLite inference in sandboxed processes so a crash does not take down the entire serving stack.
5. Detection: Alert on unexpected process crashes in TFLite serving pods; monitor for repeated crash-restart loops in inference containers.
What systems are affected by CVE-2021-37689?
This vulnerability affects the following AI/ML architecture patterns: edge/mobile inference, model serving, training pipelines, model conversion pipelines.
What is the CVSS score for CVE-2021-37689?
CVE-2021-37689 has a CVSS v3.1 base score of 5.5 (MEDIUM). The EPSS exploitation probability is 0.01%.
Technical Details
NVD Description
TensorFlow is an end-to-end open source platform for machine learning. In affected versions an attacker can craft a TFLite model that would trigger a null pointer dereference, which would result in a crash and denial of service. This is caused by the MLIR optimization of `L2NormalizeReduceAxis` operator. The [implementation](https://github.com/tensorflow/tensorflow/blob/149562d49faa709ea80df1d99fc41d005b81082a/tensorflow/compiler/mlir/lite/transforms/optimize.cc#L67-L70) unconditionally dereferences a pointer to an iterator to a vector without checking that the vector has elements. We have patched the issue in GitHub commit d6b57f461b39fd1aa8c1b870f1b974aac3554955. The fix will be included in TensorFlow 2.6.0. We will also cherrypick this commit on TensorFlow 2.5.1, TensorFlow 2.4.3, and TensorFlow 2.3.4, as these are also affected and still in supported range.
Exploitation Scenario
An adversary targets an AI application that accepts user-supplied TFLite models (e.g., a mobile ML platform, federated learning hub, or model conversion service). They craft a malformed TFLite model containing an L2NormalizeReduceAxis operator with a reduction axis vector containing zero elements. When the MLIR optimization pass processes this model, it unconditionally dereferences an iterator to the empty vector, triggering a null pointer dereference and crashing the inference process. In a containerized serving environment, this causes repeated pod restarts, degrading availability. In a batch conversion pipeline, it could block all downstream inference until the malicious model artifact is removed.
Weaknesses (CWE)
CWE-476: NULL Pointer Dereference
CVSS Vector
CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
Related Vulnerabilities
- CVE-2020-15196 (9.9) — TensorFlow: heap OOB read in sparse/ragged count ops (same package: tensorflow)
- CVE-2020-15205 (9.8) — TensorFlow: heap overflow in StringNGrams, ASLR bypass (same package: tensorflow)
- CVE-2020-15208 (9.8) — TFLite: OOB read/write via tensor dimension mismatch (same package: tensorflow)
- CVE-2019-16778 (9.8) — TensorFlow: heap overflow in UnsortedSegmentSum op (same package: tensorflow)
- CVE-2022-23587 (9.8) — TensorFlow: integer overflow in Grappler enables RCE (same package: tensorflow)