CVE-2022-21741: TensorFlow Lite: DoS via crafted depthwise conv model

MEDIUM PoC AVAILABLE
Published February 3, 2022
CISO Take

TFLite inference processes that accept externally-supplied model files can be crashed by a malicious model exploiting a division-by-zero in the depthwise convolution kernel — no code execution, pure availability impact. For any service accepting user-supplied TFLite models or exposing TFLite inference as an API, this is a trivial DoS with low-privilege access requirements. Patch to TensorFlow 2.8.0+ (or 2.7.1/2.6.3/2.5.3 cherrypicks) and restrict model sources to trusted, signed artifacts.

Risk Assessment

Medium risk overall, aligned with CVSS 6.5. No confidentiality or integrity impact, but the attack requires only network access and low privileges with no user interaction — making it operationally simple to execute repeatedly. The blast radius is limited to availability of the TFLite inference process. Not in CISA KEV and no active exploitation reported, but edge and mobile AI deployments with public-facing inference APIs warrant prompt patching given exploit simplicity.

Affected Systems

Package      Ecosystem   Vulnerable Range                Patched
tensorflow   pip         < 2.8.0 (excluding backports)   2.8.0; backports 2.7.1, 2.6.3, 2.5.3

Do you use tensorflow below a patched release? You're affected.

Severity & Risk

CVSS 3.1
6.5 / 10
EPSS
0.23%
chance of exploitation in 30 days
Higher than 46% of all CVEs
Exploitation Status
Exploit Available
Exploitation: MEDIUM
Sophistication
Trivial
Exploitation Confidence
Medium
Public PoC indexed (trickest/cve)
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

Metric                     Value
Attack Vector (AV)         Network
Attack Complexity (AC)     Low
Privileges Required (PR)   Low
User Interaction (UI)      None
Scope (S)                  Unchanged
Confidentiality (C)        None
Integrity (I)              None
Availability (A)           High

Recommended Action

5 steps
  1. Patch immediately: Upgrade to TensorFlow 2.8.0+, or apply vendor cherrypicks to 2.7.1, 2.6.3, or 2.5.3.

  2. Restrict model sources: Only load TFLite models from trusted, cryptographically signed sources; reject user-supplied model files in production APIs.

  3. Pre-load validation: Add a model validation step that checks convolution parameters are strictly positive before invoking the runtime.

  4. Process isolation: Run TFLite inference in sandboxed worker processes so a crash does not cascade to the parent service.

  5. Monitor: Alert on repeated inference process crashes or SIGFPE signals; rate-limit model submission endpoints.
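Step 4 (process isolation) can be sketched as follows. This is a minimal illustration assuming a POSIX host with the `fork` start method; the native crash a malicious model would trigger is simulated with `os.kill` rather than a real TFLite invocation (the commented-out `tf.lite.Interpreter` lines show where actual inference would go):

```python
import multiprocessing as mp
import os
import signal


def _worker(model_path: str, out) -> None:
    # In a real service this would be something like:
    #   interp = tf.lite.Interpreter(model_path=model_path)
    #   interp.allocate_tensors()
    #   interp.invoke()
    # Here we simulate the native division-by-zero crash (SIGFPE)
    # a crafted model would cause:
    if model_path == "malicious.tflite":
        os.kill(os.getpid(), signal.SIGFPE)
    out.put("ok")


def run_isolated(model_path: str, timeout: float = 5.0) -> str:
    """Run inference in a child process; a crash kills only the child."""
    ctx = mp.get_context("fork")  # POSIX-only; spawn would also work
    out = ctx.Queue()
    p = ctx.Process(target=_worker, args=(model_path, out))
    p.start()
    p.join(timeout)
    if p.exitcode is not None and p.exitcode < 0:
        # Negative exitcode means the child was killed by a signal
        # (e.g. -8 is SIGFPE on Linux); the parent survives.
        return f"worker killed by signal {-p.exitcode}"
    return out.get(timeout=1)
```

The parent inspects `exitcode` instead of sharing an address space with the crash-prone kernel, so a poisoned model takes down one worker, not the service.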

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 15 - Accuracy, robustness, and cybersecurity for high-risk AI
ISO 42001
A.10.5 - AI system robustness and resilience
NIST AI RMF
MANAGE-2.2 - Risk treatment and monitoring
OWASP LLM Top 10
LLM04 - Model Denial of Service

Frequently Asked Questions

What is CVE-2022-21741?

TFLite inference processes that accept externally-supplied model files can be crashed by a malicious model exploiting a division-by-zero in the depthwise convolution kernel — no code execution, pure availability impact. For any service accepting user-supplied TFLite models or exposing TFLite inference as an API, this is a trivial DoS with low-privilege access requirements. Patch to TensorFlow 2.8.0+ (or 2.7.1/2.6.3/2.5.3 cherrypicks) and restrict model sources to trusted, signed artifacts.

Is CVE-2022-21741 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2022-21741, increasing the risk of exploitation.

How to fix CVE-2022-21741?

  1. Patch immediately: Upgrade to TensorFlow 2.8.0+, or apply vendor cherrypicks to 2.7.1, 2.6.3, or 2.5.3.
  2. Restrict model sources: Only load TFLite models from trusted, cryptographically signed sources; reject user-supplied model files in production APIs.
  3. Pre-load validation: Add a model validation step that checks convolution parameters are strictly positive before invoking the runtime.
  4. Process isolation: Run TFLite inference in sandboxed worker processes so a crash does not cascade to the parent service.
  5. Monitor: Alert on repeated inference process crashes or SIGFPE signals; rate-limit model submission endpoints.

What systems are affected by CVE-2022-21741?

This vulnerability affects the following AI/ML architecture patterns: edge inference (TFLite), model serving, mobile AI pipelines, embedded AI systems.

What is the CVSS score for CVE-2022-21741?

CVE-2022-21741 has a CVSS v3.1 base score of 6.5 (MEDIUM). The EPSS exploitation probability is 0.23%.

Technical Details

NVD Description

TensorFlow is an open source machine learning framework. Impact: an attacker can craft a TFLite model that would trigger a division by zero in the implementation of depthwise convolutions. The parameters of the convolution can be user controlled and are also used within a division operation to determine the size of the padding that needs to be added before applying the convolution. There is no check before this division that the divisor is strictly positive. The fix will be included in TensorFlow 2.8.0 and cherrypicked onto TensorFlow 2.7.1, 2.6.3, and 2.5.3, as these are also affected and still in supported range.
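The unchecked division can be illustrated with a simplified Python model of the output-size computation. This is illustrative arithmetic only, not the actual TFLite C++ source; the formula is a standard valid-padding output-size calculation chosen to show the failure mode:

```python
def compute_out_size(image_size: int, filter_size: int, stride: int) -> int:
    # Pre-fix behavior: the divisor (stride) is taken straight from the
    # model file with no validation. A crafted model with stride == 0
    # crashes here with a division by zero.
    return (image_size - filter_size) // stride + 1


def compute_out_size_fixed(image_size: int, filter_size: int, stride: int) -> int:
    # Post-fix behavior: reject non-positive divisors before dividing,
    # turning the crash into a recoverable model-rejection error.
    if stride <= 0:
        raise ValueError("stride must be strictly positive")
    return (image_size - filter_size) // stride + 1
```

In native code the same mistake surfaces as a SIGFPE that kills the whole process, which is exactly the availability impact this CVE describes.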

Exploitation Scenario

An adversary targeting an organization's edge AI inference API — for example, an image classification service powered by TFLite — crafts a malicious .tflite model with depthwise convolution parameters set to zero. Submitting this model to the inference endpoint (requiring only a low-privilege API key) triggers a division-by-zero in the depthwise convolution padding computation, crashing the inference worker. Repeated submissions constitute a trivial, sustained DoS against the ML service. No specialized ML knowledge is required beyond basic understanding of the TFLite flatbuffer model format, placing this firmly in script-kiddie territory.
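The repeated-submission DoS above is blunted by rate-limiting the model upload endpoint (step 5 of the recommended actions). A minimal token-bucket sketch, assuming one bucket per API key, could look like this:

```python
import time


class TokenBucket:
    """Per-client token bucket: allows bursts up to `capacity`,
    then refills at `rate` tokens per second."""

    def __init__(self, rate: float, capacity: int) -> None:
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A gateway would keep one `TokenBucket` per API key and reject model submissions when `allow()` returns `False`, capping how fast an attacker can re-crash workers.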

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
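The Attack Surface values earlier in this advisory can be derived mechanically from this vector string. A small decoder for the CVSS v3.1 base-metric abbreviations:

```python
# Mapping of CVSS v3.1 base-metric codes to their human-readable values.
NAMES = {
    "AV": {"N": "Network", "A": "Adjacent", "L": "Local", "P": "Physical"},
    "AC": {"L": "Low", "H": "High"},
    "PR": {"N": "None", "L": "Low", "H": "High"},
    "UI": {"N": "None", "R": "Required"},
    "S":  {"U": "Unchanged", "C": "Changed"},
    "C":  {"N": "None", "L": "Low", "H": "High"},
    "I":  {"N": "None", "L": "Low", "H": "High"},
    "A":  {"N": "None", "L": "Low", "H": "High"},
}


def parse_cvss(vector: str) -> dict:
    """Decode a CVSS v3.1 vector string into named base metrics."""
    parts = vector.split("/")[1:]  # drop the "CVSS:3.1" prefix
    out = {}
    for part in parts:
        metric, value = part.split(":")
        out[metric] = NAMES[metric][value]
    return out
```

For this CVE, `parse_cvss("CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H")` yields Network attack vector and High availability impact with no confidentiality or integrity impact, matching the table above.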

Timeline

Published
February 3, 2022
Last Modified
May 5, 2025
First Seen
February 3, 2022

Related Vulnerabilities