CVE-2021-37691: TensorFlow TFLite: DoS via crafted model in LSH kernel

MEDIUM
Published August 12, 2021
CISO Take

A crafted TFLite model can crash TensorFlow Lite inference via division-by-zero in the LSH projection kernel, impacting availability of edge and on-device AI workloads. Risk is confined to local/authenticated contexts where untrusted TFLite models may be loaded — no data exfiltration or RCE. Upgrade to TF 2.6.0, 2.5.1, 2.4.3, or 2.3.4 and enforce model provenance controls on any pipeline ingesting third-party TFLite models.

Risk Assessment

Medium risk overall. Exploitability is moderate — requires the ability to supply a crafted TFLite model, via local access or an untrusted model ingestion path. Impact is strictly availability (crash/DoS); no confidentiality or integrity exposure. Highest exposure in pipelines that load externally-sourced or user-supplied TFLite models without validation. Not in CISA KEV and no evidence of active exploitation.

Affected Systems

| Package | Ecosystem | Vulnerable Range | Patched |
| --- | --- | --- | --- |
| tensorflow | pip | — | No patch |


Severity & Risk

CVSS 3.1: 5.5 / 10 (Medium)
EPSS: 0.01% chance of exploitation in the next 30 days (higher than 2% of all CVEs)
Exploitation Status: No known exploitation
Sophistication: Moderate

Attack Surface

AV (Attack Vector): Local
AC (Attack Complexity): Low
PR (Privileges Required): Low
UI (User Interaction): None
S (Scope): Unchanged
C (Confidentiality): None
I (Integrity): None
A (Availability): High

Recommended Action

  1. Patch: Upgrade TensorFlow to 2.6.0 or apply cherry-picked patches to 2.5.1, 2.4.3, or 2.3.4.

  2. Model provenance: Enforce cryptographic model signing and integrity verification before loading any TFLite model from external sources.

  3. Input validation: Validate TFLite model files against an expected operator allowlist prior to inference execution.

  4. Isolation: Run TFLite inference in sandboxed or containerized environments to contain crash impact.

  5. Detection: Monitor inference service processes for unexpected termination signals (SIGFPE, SIGABRT) as potential exploitation indicators.
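Step 2 above can be sketched as a pinned-digest check that runs before model bytes ever reach the TFLite runtime. This is a minimal standard-library illustration, not an established API: the helper name and the idea of a pinned digest recorded at publish time are assumptions.

```python
import hashlib
from pathlib import Path

def load_verified_model(model_path: str, expected_sha256: str) -> bytes:
    """Hypothetical helper: return the model bytes only when their
    SHA-256 digest matches a value pinned at publish/signing time.
    Anything else is refused before it can reach the interpreter."""
    data = Path(model_path).read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"model digest mismatch: got {digest}")
    return data
```

The returned bytes can then be passed to `tf.lite.Interpreter(model_content=...)`. A full provenance control would layer an actual signature scheme on top of the bare digest; the digest check alone only defends against tampering in transit or at rest.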
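For step 5, a POSIX supervisor can treat signal-killed inference workers as an indicator: `subprocess` surfaces signal terminations as negative return codes. A minimal sketch, assuming a wrapper of this shape (the function name is illustrative, not an existing tool):

```python
import signal
import subprocess
import sys

# Signals consistent with a division-by-zero crash in native code.
SUSPICIOUS = {signal.SIGFPE, signal.SIGABRT}

def run_inference_worker(cmd: list) -> int:
    """Run an inference command in a child process and flag
    terminations by SIGFPE/SIGABRT -- which on POSIX appear as
    negative returncodes -- as potential crafted-model crashes."""
    rc = subprocess.run(cmd).returncode
    if rc < 0 and signal.Signals(-rc) in SUSPICIOUS:
        print(f"ALERT: worker killed by {signal.Signals(-rc).name}",
              file=sys.stderr)
    return rc
```

Running inference in a child process also delivers step 4's containment for free: a crafted model takes down only the worker, not the serving loop.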

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 15 - Accuracy, robustness and cybersecurity
ISO 42001
8.4 - AI system operation and monitoring
NIST AI RMF
MS-2.5 - AI system robustness and reliability testing

Frequently Asked Questions

What is CVE-2021-37691?

A crafted TFLite model can crash TensorFlow Lite inference via division-by-zero in the LSH projection kernel, impacting availability of edge and on-device AI workloads. Risk is confined to local/authenticated contexts where untrusted TFLite models may be loaded — no data exfiltration or RCE. Upgrade to TF 2.6.0, 2.5.1, 2.4.3, or 2.3.4 and enforce model provenance controls on any pipeline ingesting third-party TFLite models.

Is CVE-2021-37691 actively exploited?

No confirmed active exploitation of CVE-2021-37691 has been reported, but organizations should still patch proactively.

How to fix CVE-2021-37691?

  1. Patch: Upgrade TensorFlow to 2.6.0 or apply cherry-picked patches to 2.5.1, 2.4.3, or 2.3.4.
  2. Model provenance: Enforce cryptographic model signing and integrity verification before loading any TFLite model from external sources.
  3. Input validation: Validate TFLite model files against an expected operator allowlist prior to inference execution.
  4. Isolation: Run TFLite inference in sandboxed or containerized environments to contain crash impact.
  5. Detection: Monitor inference service processes for unexpected termination signals (SIGFPE, SIGABRT) as potential exploitation indicators.

What systems are affected by CVE-2021-37691?

This vulnerability affects the following AI/ML architecture patterns: edge inference, on-device ML (mobile/IoT), model serving, TFLite deployment pipelines.

What is the CVSS score for CVE-2021-37691?

CVE-2021-37691 has a CVSS v3.1 base score of 5.5 (MEDIUM). The EPSS exploitation probability is 0.01%.

Technical Details

NVD Description

TensorFlow is an end-to-end open source platform for machine learning. In affected versions an attacker can craft a TFLite model that would trigger a division by zero error in the LSH projection [implementation](https://github.com/tensorflow/tensorflow/blob/149562d49faa709ea80df1d99fc41d005b81082a/tensorflow/lite/kernels/lsh_projection.cc#L118). We have patched the issue in GitHub commit 0575b640091680cfb70f4dd93e70658de43b94f9. The fix will be included in TensorFlow 2.6.0. We will also cherry-pick this commit on TensorFlow 2.5.1, TensorFlow 2.4.3, and TensorFlow 2.3.4, as these are also affected and still in supported range.
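The fix follows the standard defensive pattern for this class of bug: validate the divisor before computing a modulus or division that attacker-controlled model parameters can drive to zero. Transposed to Python purely for illustration (the function and names are hypothetical, not TensorFlow's actual C++ kernel code):

```python
def lsh_bucket(running_hash: int, num_buckets: int) -> int:
    """Hypothetical illustration of the guard pattern in the fix:
    reject a degenerate divisor up front, instead of letting a
    crafted model trigger a division-by-zero (SIGFPE) mid-inference."""
    if num_buckets <= 0:
        raise ValueError("num_buckets must be positive")
    return running_hash % num_buckets
```

The unpatched kernel performed the equivalent of the final line without the guard, so a model whose parameters yield a zero divisor crashed the process rather than failing with a recoverable error.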

Exploitation Scenario

An adversary with access to the model ingestion pipeline — via a compromised model registry, supply chain, CI/CD system, or OTA update channel — supplies a TFLite model with LSH projection parameters engineered to produce a zero denominator at inference time. When the runtime loads and executes this model, a division-by-zero triggers a crash, taking down the inference service. In edge deployments with OTA model delivery, this could achieve remote DoS if the update channel lacks integrity controls.

CVSS Vector

CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

Timeline

Published
August 12, 2021
Last Modified
November 21, 2024
First Seen
August 12, 2021

Related Vulnerabilities