CVE-2021-29590: TensorFlow TFLite: OOB read via empty tensor in Min/Max ops
Severity: HIGH. PoC available. Upgrade all TFLite deployments to TF 2.5.0, 2.4.2, 2.3.3, 2.2.3, or 2.1.4 now. An attacker with local access (including co-tenant processes or container escapes) can craft empty tensor inputs to the Minimum/Maximum operators and read out-of-bounds heap memory, potentially leaking model weights, intermediate activation data, or process secrets, or crash the inference runtime entirely. If you ship TFLite on edge devices or in containerized serving environments, treat this as priority patching regardless of the 2021 publication date.
Risk Assessment
Risk is HIGH in edge and on-device deployments, MEDIUM in containerized model-serving environments. The local attack vector caps real-world blast radius for internet-exposed systems, but 'local' in ML infrastructure often means a co-tenant microservice, a compromised notebook server, or a malicious TFLite model file distributed through an internal model registry. CVSS confidentiality impact is HIGH (the heap OOB read can expose out-of-bounds process memory), availability impact is HIGH (crash/segfault). No CISA KEV listing and no reported active exploitation reduce urgency, but the low attack complexity (no special skill required once the model file is crafted) keeps this firmly in the patching queue.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| tensorflow | pip | < 2.1.4; >= 2.2.0, < 2.2.3; >= 2.3.0, < 2.3.3; >= 2.4.0, < 2.4.2 | 2.5.0, 2.4.2, 2.3.3, 2.2.3, 2.1.4 |
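As a quick triage aid, the patched floors above can be encoded in a small version gate. This is a sketch: `is_patched` is a hypothetical helper name, and it assumes plain `x.y.z` release strings rather than pre-release tags.

```python
def is_patched(version_str):
    """Return True if a TensorFlow release string is at or above the
    patched floor for its branch (2.5.0, 2.4.2, 2.3.3, 2.2.3, 2.1.4).

    Assumes a plain "x.y.z" release string; pre-release tags such as
    "2.5.0rc1" are out of scope for this sketch.
    """
    parts = tuple(int(p) for p in version_str.split(".")[:3])
    if parts >= (2, 5, 0):
        return True
    # Per-branch cherry-pick floors from the upstream advisory.
    floors = {(2, 1): (2, 1, 4), (2, 2): (2, 2, 3),
              (2, 3): (2, 3, 3), (2, 4): (2, 4, 2)}
    floor = floors.get(parts[:2])
    # Branches with no cherry-pick (e.g. 2.0.x, 1.x) remain vulnerable.
    return floor is not None and parts >= floor
```

In a pip-managed environment the installed version is available as `tensorflow.__version__` or via `importlib.metadata.version("tensorflow")`.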
Recommended Action
Five steps:

1. PATCH: Upgrade to TensorFlow 2.5.0, or cherry-pick to 2.4.2 / 2.3.3 / 2.2.3 / 2.1.4 per your branch. Commit 953f28d is the authoritative fix.
2. INPUT VALIDATION: Add pre-inference checks that no input tensor has a shape with zero-length dimensions before it is passed to the TFLite interpreter. Reject empty tensors at the serving layer.
3. MODEL PROVENANCE: Restrict ingestion of .tflite model files to trusted, signed sources. Implement model-registry signing and hash verification to block malicious model-file injection.
4. CONTAINER ISOLATION: Ensure TFLite inference processes run with minimal privileges and strong container isolation to limit the impact of a heap read primitive.
5. DETECTION: Monitor for inference-process crashes (SIGSEGV/SIGABRT) originating from tflite::reference_ops::MaximumMinimumBroadcast* call stacks; these are indicators of exploitation attempts.
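The input-validation step above can be sketched as a serving-layer guard. `reject_empty_shapes` is a hypothetical helper name; in practice the shapes would come from the arrays you are about to feed, or from `tf.lite.Interpreter.get_input_details()`.

```python
def reject_empty_shapes(named_shapes):
    """Raise ValueError if any input shape contains a zero-length
    dimension, before the tensors ever reach the TFLite interpreter.

    `named_shapes` maps an input name to its shape tuple, e.g. the
    `.shape` of each array a serving layer is about to feed.
    """
    for name, shape in named_shapes.items():
        if 0 in tuple(shape):
            raise ValueError(
                f"input {name!r} has an empty dimension in shape "
                f"{tuple(shape)}; rejecting it (CVE-2021-29590 hardening)")
```

A zero-length batch such as shape (0, 224, 224, 3) is rejected; well-formed inputs pass through untouched.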
Frequently Asked Questions
What is CVE-2021-29590?
CVE-2021-29590 is a heap out-of-bounds read in TensorFlow Lite. The Minimum and Maximum operators index both input tensors with the same index without bounds checking, so a crafted model containing an empty input tensor lets a local attacker read out-of-bounds heap memory (potentially leaking model weights or other inference context) or crash the inference runtime. The fix shipped in TensorFlow 2.5.0 and was cherry-picked to 2.4.2, 2.3.3, 2.2.3, and 2.1.4.
Is CVE-2021-29590 actively exploited?
No active exploitation of CVE-2021-29590 has been reported, but proof-of-concept exploit code is publicly available, which lowers the bar for exploitation.
How to fix CVE-2021-29590?
1. PATCH: upgrade to TensorFlow 2.5.0, or cherry-pick to 2.4.2 / 2.3.3 / 2.2.3 / 2.1.4 per your branch (commit 953f28d is the authoritative fix). 2. INPUT VALIDATION: reject input tensors with zero-length dimensions at the serving layer, before they reach the TFLite interpreter. 3. MODEL PROVENANCE: restrict ingestion of .tflite model files to trusted, signed sources with registry signing and hash verification. 4. CONTAINER ISOLATION: run TFLite inference with minimal privileges and strong container isolation. 5. DETECTION: monitor for inference-process crashes (SIGSEGV/SIGABRT) originating from tflite::reference_ops::MaximumMinimumBroadcast* call stacks.
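The hash-verification control in step 3 can be sketched minimally as an allowlist check. `verify_model_hash` and the allowlist format are hypothetical; a real registry would layer signature verification on top.

```python
import hashlib

def verify_model_hash(model_path, allowed_sha256):
    """Return True only if the .tflite file's SHA-256 digest appears in
    the registry allowlist; refuse to load the model otherwise.
    """
    digest = hashlib.sha256()
    with open(model_path, "rb") as f:
        # Stream the file in 64 KiB chunks so large models stay cheap.
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest() in allowed_sha256
```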
What systems are affected by CVE-2021-29590?
This vulnerability affects the following AI/ML architecture patterns: edge inference, on-device ML (mobile/embedded), model serving, training pipelines, containerized inference.
What is the CVSS score for CVE-2021-29590?
CVE-2021-29590 has a CVSS v3.1 base score of 7.1 (HIGH). The EPSS exploitation probability is 0.01%.
Technical Details
NVD Description
TensorFlow is an end-to-end open source platform for machine learning. The implementations of the `Minimum` and `Maximum` TFLite operators can be used to read data outside of bounds of heap allocated objects, if any of the two input tensor arguments are empty. This is because the broadcasting implementation (https://github.com/tensorflow/tensorflow/blob/0d45ea1ca641b21b73bcf9c00e0179cda284e7e7/tensorflow/lite/kernels/internal/reference/maximum_minimum.h#L52-L56) indexes in both tensors with the same index but does not validate that the index is within bounds. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range.
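As an illustration of the flaw, here is a pure-Python sketch (not the real C++ kernel in maximum_minimum.h): the broadcast loop computes one index per output element and reads both input buffers at that index without checking either buffer's length. Python raises IndexError at the marked line; the C++ code instead silently reads adjacent heap memory.

```python
def broadcast_maximum_unchecked(data1, data2, flat_size):
    """Simplified analogue of the vulnerable broadcast loop: both
    inputs are indexed with the same index, with no bounds check.
    """
    out = []
    for i in range(flat_size):
        # If data2 is empty, this read is out of bounds.  Python raises
        # IndexError here; the C++ kernel reads past the heap allocation.
        out.append(max(data1[i], data2[i]))
    return out
```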
Exploitation Scenario
An adversary with access to an internal model registry or model-serving pipeline crafts a TFLite flatbuffer model containing a Minimum or Maximum operator where one input tensor is defined with an empty shape (e.g., shape=[0]). When the TFLite interpreter executes the broadcasting kernel, it iterates using a computed index against both tensors but does not bounds-check against the empty tensor's zero-length buffer. The CPU reads from heap memory adjacent to the empty tensor allocation, potentially exposing the weights of co-loaded model layers, LSTM cell states, or other inference context. In a multi-tenant model serving environment (e.g., a shared inference microservice), this leaks data from other tenants' inference sessions. The same crafted model causes the inference process to segfault if the OOB address is unmapped, enabling a targeted denial of service against a specific inference worker.
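Defensively, a registry or serving layer can screen a candidate .tflite file for zero-length tensor dimensions before accepting it. The scan below is a sketch over plain tensor-detail dicts; in practice they would come from `tf.lite.Interpreter(model_path=...).get_tensor_details()`, whose entries carry `name` and `shape` fields.

```python
def find_empty_tensors(tensor_details):
    """Return the names of tensors whose declared shape contains a
    zero-length dimension (candidate CVE-2021-29590 triggers).

    `tensor_details` is a list of dicts with "name" and "shape" keys,
    matching the layout returned by TFLite's get_tensor_details().
    """
    return [d["name"] for d in tensor_details if 0 in tuple(d["shape"])]
```

A non-empty result is grounds for quarantining the model file rather than loading it.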
Weaknesses (CWE)
CWE-125: Out-of-bounds Read
CVSS Vector
CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:H
References
- github.com/tensorflow/tensorflow/commit/953f28dca13c92839ba389c055587cfe6c723578 (patch, third party)
- github.com/tensorflow/tensorflow/security/advisories/GHSA-24x6-8c7m-hv3f (exploit, patch, third party)
Related Vulnerabilities
- CVE-2020-15196 (9.9) TensorFlow: heap OOB read in sparse/ragged count ops (same package: tensorflow)
- CVE-2020-15205 (9.8) TensorFlow: heap overflow in StringNGrams, ASLR bypass (same package: tensorflow)
- CVE-2020-15208 (9.8) TFLite: OOB read/write via tensor dimension mismatch (same package: tensorflow)
- CVE-2019-16778 (9.8) TensorFlow: heap overflow in UnsortedSegmentSum op (same package: tensorflow)
- CVE-2022-23587 (9.8) TensorFlow: integer overflow in Grappler enables RCE (same package: tensorflow)