CVE-2022-29212: TensorFlow Lite: quantization assert crash (DoS)
Severity: MEDIUM | PoC available | CISA SSVC decision: Track

A crafted TFLite quantized model can crash the TFLite interpreter via an assertion failure in the quantization scaling logic. If your deployment loads externally sourced or user-provided TFLite models, this is a denial-of-service vector. Upgrade to TensorFlow 2.9.0, 2.8.1, 2.7.2, or 2.6.4 immediately and audit model-ingestion pipelines for untrusted inputs.
Risk Assessment
Medium risk overall, but context-dependent. CVSS 5.5 reflects local attack vector and availability-only impact. However, in edge AI or mobile inference deployments that accept third-party TFLite models, this becomes a reliable DoS: low complexity, no user interaction after model load. The real exposure is in model marketplaces, federated learning pipelines, or any system where the model file originates outside the organization's control.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| tensorflow | pip | < 2.6.4 | 2.6.4 |
| tensorflow | pip | >= 2.7.0, < 2.7.2 | 2.7.2 |
| tensorflow | pip | >= 2.8.0, < 2.8.1 | 2.8.1 |
If you load TFLite models with an unpatched TensorFlow release, you are affected.
Recommended Action
1. Patch: Upgrade TensorFlow to 2.9.0, 2.8.1, 2.7.2, or 2.6.4.
2. Model validation: Implement cryptographic signing and integrity verification for all TFLite models before loading. Reject models from untrusted sources.
3. Sandbox: Run TFLite inference in isolated processes so a crash does not take down the parent service.
4. Detection: Monitor for abnormal process termination or assertion failures in TFLite inference services (SIGABRT signals).
5. Inventory: Audit which services load TFLite models and from what sources, prioritizing externally sourced model pipelines.
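The model-validation step above can be sketched as a pre-load gate. This is an illustrative example, not part of the TFLite API: `verify_model_digest` and the allow-list set are assumed names, and in production you would use signed manifests (e.g. Sigstore or TUF) rather than a bare hash allow-list.

```python
import hashlib

# Hypothetical pre-load gate: only pass model bytes to the TFLite
# interpreter if their SHA-256 digest is on an allow-list of vetted models.
# The allow-list source is an assumption for this sketch.
def verify_model_digest(model_bytes: bytes, allowed_sha256: set) -> bool:
    digest = hashlib.sha256(model_bytes).hexdigest()
    if digest not in allowed_sha256:
        # Refuse to load: an unvetted model could trigger the
        # CVE-2022-29212 assertion abort (or worse).
        raise ValueError(f"untrusted model digest {digest[:12]}... refused")
    return True
```

A rejected model never reaches the vulnerable quantization code path, which is why integrity checking belongs before, not after, interpreter construction.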
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision: Track, based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2022-29212?
A crafted TFLite quantized model can crash the TFLite interpreter via an assertion failure in the quantization scaling logic. If your deployment loads externally-sourced or user-provided TFLite models, this is a denial-of-service vector. Patch to TF 2.9.0, 2.8.1, 2.7.2, or 2.6.4 immediately and audit model ingestion pipelines for untrusted inputs.
Is CVE-2022-29212 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2022-29212, increasing the risk of exploitation.
How to fix CVE-2022-29212?
1. Patch: Upgrade TensorFlow to 2.9.0, 2.8.1, 2.7.2, or 2.6.4.
2. Model validation: Implement cryptographic signing and integrity verification for all TFLite models before loading. Reject models from untrusted sources.
3. Sandbox: Run TFLite inference in isolated processes so a crash does not take down the parent service.
4. Detection: Monitor for abnormal process termination or assertion failures in TFLite inference services (SIGABRT signals).
5. Inventory: Audit which services load TFLite models and from what sources, prioritizing externally sourced model pipelines.
What systems are affected by CVE-2022-29212?
This vulnerability affects the following AI/ML architecture patterns: model serving, edge/mobile inference, training pipelines, model distribution pipelines.
What is the CVSS score for CVE-2022-29212?
CVE-2022-29212 has a CVSS v3.1 base score of 5.5 (MEDIUM). The EPSS exploitation probability is 0.11%.
Technical Details
NVD Description
TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, certain TFLite models that were created using TFLite model converter would crash when loaded in the TFLite interpreter. The culprit is that during quantization the scale of values could be greater than 1 but code was always assuming sub-unit scaling. Thus, since code was calling `QuantizeMultiplierSmallerThanOneExp`, the `TFLITE_CHECK_LT` assertion would trigger and abort the process. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.
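The failure mode can be illustrated with a minimal Python sketch of the fixed-point conversion TFLite performs. This is a simplified model of `QuantizeMultiplierSmallerThanOneExp` from `quantization_util.cc`, not the library's actual code; the function name, Q31 rounding, and return shape here are illustrative.

```python
import math

# Simplified sketch of TFLite's sub-unit multiplier quantization.
# The real code asserts (via TFLITE_CHECK_LT) that the multiplier is < 1.0;
# a crafted model whose effective scale is >= 1.0 trips that check and
# aborts the whole process -- the crash described in CVE-2022-29212.
def quantize_multiplier_smaller_than_one(real_multiplier: float):
    assert 0.0 < real_multiplier < 1.0, "TFLITE_CHECK_LT would abort here"
    # frexp: real_multiplier = significand * 2**shift, significand in [0.5, 1)
    significand, shift = math.frexp(real_multiplier)
    quantized = round(significand * (1 << 31))  # Q31 fixed-point significand
    return quantized, shift
```

The patch routes multipliers >= 1 through a code path that handles them instead of asserting, so a hostile model can no longer abort the interpreter.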
Exploitation Scenario
An adversary targeting an organization's edge AI inference service (e.g., a computer vision pipeline on IoT devices or mobile apps) crafts a TFLite model with quantization parameters that include a scale value >= 1.0. They publish this model to a public model repository or submit it via a model upload feature. When the vulnerable TFLite interpreter loads the model, the TFLITE_CHECK_LT assertion in QuantizeMultiplierSmallerThanOneExp triggers SIGABRT, crashing the inference process. In a fleet deployment scenario, all devices pulling the same model update simultaneously become unavailable — a scalable, low-effort denial-of-service against production AI infrastructure.
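Because the abort arrives as SIGABRT, the sandboxing mitigation can be sketched as running inference in a child process and inspecting its exit status. This is an assumed pattern, not TFLite functionality: `worker_script` is a hypothetical script that loads the model and runs inference, and the return-value shape is illustrative.

```python
import signal
import subprocess
import sys

# Hypothetical isolation wrapper: run TFLite inference in a child process
# so an assertion abort (SIGABRT) kills only the worker, not the service.
def run_inference_isolated(worker_script: str, model_path: str, timeout: int = 30):
    proc = subprocess.run(
        [sys.executable, worker_script, model_path],
        capture_output=True,
        timeout=timeout,
    )
    if proc.returncode == -signal.SIGABRT:
        # Matches the CVE-2022-29212 failure mode: quarantine the model.
        return {"ok": False, "reason": "interpreter aborted (SIGABRT)"}
    if proc.returncode != 0:
        return {"ok": False, "reason": f"worker exited with {proc.returncode}"}
    return {"ok": True, "output": proc.stdout}
```

On POSIX, `subprocess` reports signal deaths as negative return codes, so a SIGABRT from `TFLITE_CHECK_LT` shows up as `-6` and can be logged and alerted on, which is also how the detection step above would observe the crash.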
Weaknesses (CWE)
CWE-617: Reachable Assertion
CVSS Vector
CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
References
- github.com/tensorflow/tensorflow/blob/f3b9bf4c3c0597563b289c0512e98d4ce81f886e/tensorflow/lite/kernels/internal/quantization_util.cc (third party)
- github.com/tensorflow/tensorflow/commit/a989426ee1346693cc015792f11d715f6944f2b8 (patch, third party)
- github.com/tensorflow/tensorflow/issues/43661 (exploit, issue, third party)
- github.com/tensorflow/tensorflow/releases/tag/v2.6.4 (release, third party)
- github.com/tensorflow/tensorflow/releases/tag/v2.7.2 (release, third party)
- github.com/tensorflow/tensorflow/releases/tag/v2.8.1 (release, third party)
- github.com/tensorflow/tensorflow/releases/tag/v2.9.0 (release, third party)
- github.com/tensorflow/tensorflow/security/advisories/GHSA-8wwm-6264-x792 (exploit, patch, third party)
Related Vulnerabilities
- CVE-2020-15196 (9.9) TensorFlow: heap OOB read in sparse/ragged count ops
- CVE-2020-15205 (9.8) TensorFlow: heap overflow in StringNGrams, ASLR bypass
- CVE-2020-15208 (9.8) TFLite: OOB read/write via tensor dimension mismatch
- CVE-2019-16778 (9.8) TensorFlow: heap overflow in UnsortedSegmentSum op
- CVE-2022-23587 (9.8) TensorFlow: integer overflow in Grappler enables RCE

All of the above affect the same package (tensorflow).