CVE-2021-37691: TensorFlow TFLite: DoS via crafted model in LSH kernel
Severity: MEDIUM

A crafted TFLite model can crash TensorFlow Lite inference via division-by-zero in the LSH projection kernel, impacting availability of edge and on-device AI workloads. Risk is confined to local/authenticated contexts where untrusted TFLite models may be loaded — no data exfiltration or RCE. Upgrade to TF 2.6.0, 2.5.1, 2.4.3, or 2.3.4 and enforce model provenance controls on any pipeline ingesting third-party TFLite models.
Risk Assessment
Medium risk overall. Exploitability is moderate — requires the ability to supply a crafted TFLite model, via local access or an untrusted model ingestion path. Impact is strictly availability (crash/DoS); no confidentiality or integrity exposure. Highest exposure in pipelines that load externally-sourced or user-supplied TFLite models without validation. Not in CISA KEV and no evidence of active exploitation.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| tensorflow | pip | < 2.3.4; >= 2.4.0, < 2.4.3; >= 2.5.0, < 2.5.1 | 2.3.4 / 2.4.3 / 2.5.1 / 2.6.0 |

If you load untrusted TFLite models with a TensorFlow version in a vulnerable range, you are affected.
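As a quick triage aid, the patched-version boundaries from the advisory can be encoded in a small helper. This is a sketch; the function name and the simple tuple-based version comparison are illustrative assumptions, not part of any TensorFlow API.

```python
def is_patched(ver: str) -> bool:
    """Return True if a TensorFlow version string is at or above a
    release patched for CVE-2021-37691 (2.6.0, 2.5.1, 2.4.3, 2.3.4)."""
    parts = tuple(int(p) for p in ver.split(".")[:3])
    if parts >= (2, 6, 0):
        return True
    if (2, 5, 0) <= parts < (2, 6, 0):
        return parts >= (2, 5, 1)  # 2.5.x line patched at 2.5.1
    if (2, 4, 0) <= parts < (2, 5, 0):
        return parts >= (2, 4, 3)  # 2.4.x line patched at 2.4.3
    return parts >= (2, 3, 4)      # 2.3.x line patched at 2.3.4
```

For example, `is_patched("2.5.0")` is False while `is_patched("2.5.1")` is True. Compare against the runtime's `tf.__version__` in your environment.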
Recommended Action
Five steps:

1. Patch: Upgrade TensorFlow to 2.6.0 or apply the cherry-picked patches in 2.5.1, 2.4.3, or 2.3.4.
2. Model provenance: Enforce cryptographic model signing and integrity verification before loading any TFLite model from external sources.
3. Input validation: Validate TFLite model files against an expected operator allowlist prior to inference execution.
4. Isolation: Run TFLite inference in sandboxed or containerized environments to contain crash impact.
5. Detection: Monitor inference service processes for unexpected termination signals (SIGFPE, SIGABRT) as potential exploitation indicators.
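The model-provenance step above can be sketched as a digest check against a trusted manifest before any model file is handed to the interpreter. The manifest contents and file names here are illustrative assumptions; a production setup would use signed manifests rather than a hardcoded dict.

```python
import hashlib
import os

# Hypothetical manifest mapping model file names to trusted SHA-256
# digests (this example digest is the SHA-256 of an empty file).
TRUSTED_DIGESTS = {
    "mobilenet_v2.tflite":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_model(path: str) -> bool:
    """Return True only if the file's SHA-256 matches the manifest entry."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return TRUSTED_DIGESTS.get(os.path.basename(path)) == digest
```

Only models for which `verify_model` returns True should reach the TFLite interpreter; anything else is rejected before inference.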
Frequently Asked Questions
What is CVE-2021-37691?
A crafted TFLite model can crash TensorFlow Lite inference via division-by-zero in the LSH projection kernel, impacting availability of edge and on-device AI workloads. Risk is confined to local/authenticated contexts where untrusted TFLite models may be loaded — no data exfiltration or RCE. Upgrade to TF 2.6.0, 2.5.1, 2.4.3, or 2.3.4 and enforce model provenance controls on any pipeline ingesting third-party TFLite models.
Is CVE-2021-37691 actively exploited?
No confirmed active exploitation of CVE-2021-37691 has been reported, but organizations should still patch proactively.
How to fix CVE-2021-37691?
1. Patch: Upgrade TensorFlow to 2.6.0 or apply the cherry-picked patches in 2.5.1, 2.4.3, or 2.3.4.
2. Model provenance: Enforce cryptographic model signing and integrity verification before loading any TFLite model from external sources.
3. Input validation: Validate TFLite model files against an expected operator allowlist prior to inference execution.
4. Isolation: Run TFLite inference in sandboxed or containerized environments to contain crash impact.
5. Detection: Monitor inference service processes for unexpected termination signals (SIGFPE, SIGABRT) as potential exploitation indicators.
What systems are affected by CVE-2021-37691?
This vulnerability affects the following AI/ML architecture patterns: edge inference, on-device ML (mobile/IoT), model serving, TFLite deployment pipelines.
What is the CVSS score for CVE-2021-37691?
CVE-2021-37691 has a CVSS v3.1 base score of 5.5 (MEDIUM). The EPSS exploitation probability is 0.01%.
Technical Details
NVD Description
TensorFlow is an end-to-end open source platform for machine learning. In affected versions an attacker can craft a TFLite model that would trigger a division by zero error in the LSH [implementation](https://github.com/tensorflow/tensorflow/blob/149562d49faa709ea80df1d99fc41d005b81082a/tensorflow/lite/kernels/lsh_projection.cc#L118). We have patched the issue in GitHub commit 0575b640091680cfb70f4dd93e70658de43b94f9. The fix will be included in TensorFlow 2.6.0. We will also cherrypick this commit on TensorFlow 2.5.1, TensorFlow 2.4.3, and TensorFlow 2.3.4, as these are also affected and still in supported range.
Exploitation Scenario
An adversary with access to the model ingestion pipeline — via a compromised model registry, supply chain, CI/CD system, or OTA update channel — supplies a TFLite model with LSH projection parameters engineered to produce a zero denominator at inference time. When the runtime loads and executes this model, a division-by-zero triggers a crash, taking down the inference service. In edge deployments with OTA model delivery, this could achieve remote DoS if the update channel lacks integrity controls.
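The isolation and detection mitigations follow directly from this scenario: because the failure mode is a fatal signal in the inference process, running inference in a child process and inspecting its termination signal contains the crash and surfaces an indicator. A minimal sketch, assuming a POSIX host; the inference command line is a placeholder, not a real TFLite CLI.

```python
import signal
import subprocess

def run_isolated(cmd: list) -> str:
    """Run an inference command in a child process and classify the result.
    A SIGFPE/SIGABRT termination is flagged as a potential crafted-model
    crash (the CVE-2021-37691 failure mode is a division-by-zero abort)."""
    proc = subprocess.run(cmd)
    if proc.returncode < 0:  # on POSIX, negative means killed by a signal
        sig = signal.Signals(-proc.returncode)
        if sig in (signal.SIGFPE, signal.SIGABRT):
            return f"suspect crash: {sig.name}"
        return f"crashed: {sig.name}"
    return "ok" if proc.returncode == 0 else f"exit {proc.returncode}"
```

A supervisor wrapping the real inference binary this way keeps a crafted model from taking down the whole service and gives monitoring a concrete signal to alert on.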
Weaknesses (CWE)
CWE-369: Divide By Zero
CVSS Vector
CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

References
Timeline
Related Vulnerabilities
All affect the same package (tensorflow):

- CVE-2020-15196 (9.9): TensorFlow: heap OOB read in sparse/ragged count ops
- CVE-2020-15205 (9.8): TensorFlow: heap overflow in StringNGrams, ASLR bypass
- CVE-2020-15208 (9.8): TFLite: OOB read/write via tensor dimension mismatch
- CVE-2019-16778 (9.8): TensorFlow: heap overflow in UnsortedSegmentSum op
- CVE-2022-23587 (9.8): TensorFlow: integer overflow in Grappler enables RCE