CVE-2021-29546: TensorFlow: div-by-zero in QuantizedBiasAdd, C/I/A high

HIGH PoC AVAILABLE
Published May 14, 2021
CISO Take

Any TensorFlow deployment running quantized models on versions prior to 2.5.0 (or the backport series) is vulnerable to a divide-by-zero in QuantizedBiasAdd that yields undefined behavior with full C/I/A impact. While the CVSS vector is local, in containerized ML inference environments 'local' effectively maps to network-reachable: an attacker supplying a crafted model or crafted input tensor shapes can trigger this from an API endpoint. Patch immediately to TF 2.5.0, 2.4.2, 2.3.3, 2.2.3, or 2.1.4 — no workaround exists beyond upgrade.

Risk Assessment

High risk for organizations running quantized TensorFlow models in any production inference context. CVSS 7.8 with low attack complexity and no user interaction required means exploitation is straightforward once access is established. The local attack vector is misleading in cloud-native and containerized deployments where ML serving APIs are network-accessible, effectively elevating the practical exposure surface. The full C:H/I:H/A:H impact means a successful exploit can result in process crash, memory corruption, or potential code execution depending on platform and allocator behavior.

Affected Systems

Package: tensorflow (pip)
Vulnerable range: < 2.5.0 (2.1.x before 2.1.4, 2.2.x before 2.2.3, 2.3.x before 2.3.3, 2.4.x before 2.4.2)
Patched in: 2.5.0, 2.4.2, 2.3.3, 2.2.3, 2.1.4


Severity & Risk

CVSS 3.1
7.8 / 10
EPSS
0.01%
chance of exploitation in 30 days
Higher than 1% of all CVEs
Exploitation Status
Exploit Available
Exploitation: MEDIUM
Sophistication
Moderate
Exploitation Confidence
Medium
Public PoC indexed (trickest/cve)
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

AV: Local
AC: Low
PR: Low
UI: None
S: Unchanged
C: High
I: High
A: High

Recommended Action

6 steps
  1. PATCH

    Upgrade TensorFlow to 2.5.0, 2.4.2, 2.3.3, 2.2.3, or 2.1.4 — all contain the fix (commit 67784700).

  2. INVENTORY

    Identify all services running TF inference, including embedded TFLite and TF Serving containers.

  3. ISOLATE

    Until patched, restrict model loading to internally-signed models; reject untrusted SavedModel or frozen graph uploads.

  4. SANDBOX

    Run TF inference workers in isolated containers with seccomp/AppArmor profiles to contain crash blast radius.

  5. DETECT

    Alert on unexpected inference worker crashes or restarts — they may indicate exploitation attempts.

  6. VALIDATE

    For CI/CD pipelines, enforce model provenance checks before promotion to production serving.
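The patch floors in step 1 can be encoded as a quick inventory aid. This is an illustrative sketch, not official tooling: the `is_patched` helper and its version table are derived from the advisory's patched-release list, and it assumes plain `major.minor.patch` release strings (no rc/dev suffixes).

```python
# Minimum patched version per affected release line, per the advisory.
# 2.5.0 and later contain the fix directly.
PATCHED_MIN = {
    (2, 1): (2, 1, 4),
    (2, 2): (2, 2, 3),
    (2, 3): (2, 3, 3),
    (2, 4): (2, 4, 2),
}

def is_patched(version: str) -> bool:
    """Return True if this TensorFlow release contains the CVE-2021-29546 fix.

    Assumes a plain 'major.minor.patch' version string. Release lines not
    listed in PATCHED_MIN (e.g. 2.0.x, 1.x) never received the backport and
    are treated as unpatched.
    """
    parts = tuple(int(p) for p in version.split(".")[:3])
    if parts >= (2, 5, 0):
        return True
    floor = PATCHED_MIN.get(parts[:2])
    return floor is not None and parts >= floor
```

Feeding this the output of `pip show tensorflow` (or `importlib.metadata.version`) across a fleet gives a fast first pass for the INVENTORY step.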

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity for high-risk AI systems
ISO 42001
A.10.1 - AI system security — vulnerability management
NIST AI RMF
MANAGE-2.2 - Risks from third-party AI components are tracked and addressed
OWASP LLM Top 10
LLM03:2025 - Supply Chain

Frequently Asked Questions

What is CVE-2021-29546?

Any TensorFlow deployment running quantized models on versions prior to 2.5.0 (or the backport series) is vulnerable to a divide-by-zero in QuantizedBiasAdd that yields undefined behavior with full C/I/A impact. While the CVSS vector is local, in containerized ML inference environments 'local' effectively maps to network-reachable: an attacker supplying a crafted model or crafted input tensor shapes can trigger this from an API endpoint. Patch immediately to TF 2.5.0, 2.4.2, 2.3.3, 2.2.3, or 2.1.4 — no workaround exists beyond upgrade.

Is CVE-2021-29546 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2021-29546, increasing the risk of exploitation.

How to fix CVE-2021-29546?

1. PATCH: Upgrade TensorFlow to 2.5.0, 2.4.2, 2.3.3, 2.2.3, or 2.1.4 — all contain the fix (commit 67784700).
2. INVENTORY: Identify all services running TF inference, including embedded TFLite and TF Serving containers.
3. ISOLATE: Until patched, restrict model loading to internally-signed models; reject untrusted SavedModel or frozen graph uploads.
4. SANDBOX: Run TF inference workers in isolated containers with seccomp/AppArmor profiles to contain crash blast radius.
5. DETECT: Alert on unexpected inference worker crashes or restarts — they may indicate exploitation attempts.
6. VALIDATE: For CI/CD pipelines, enforce model provenance checks before promotion to production serving.

What systems are affected by CVE-2021-29546?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, edge inference.

What is the CVSS score for CVE-2021-29546?

CVE-2021-29546 has a CVSS v3.1 base score of 7.8 (HIGH). The EPSS exploitation probability is 0.01%.

Technical Details

NVD Description

TensorFlow is an end-to-end open source platform for machine learning. An attacker can trigger an integer division by zero undefined behavior in `tf.raw_ops.QuantizedBiasAdd`. This is because the implementation of the Eigen kernel(https://github.com/tensorflow/tensorflow/blob/61bca8bd5ba8a68b2d97435ddfafcdf2b85672cd/tensorflow/core/kernels/quantization_utils.h#L812-L849) does a division by the number of elements of the smaller input (based on shape) without checking that this is not zero. The fix will be included in TensorFlow 2.5.0. We will also cherrypick this commit on TensorFlow 2.4.2, TensorFlow 2.3.3, TensorFlow 2.2.3 and TensorFlow 2.1.4, as these are also affected and still in supported range.
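The flaw and its fix can be sketched in miniature. This is not the actual Eigen C++ from `quantization_utils.h`, only an illustrative Python analog of the pattern the NVD description names: dividing by the element count of the smaller input, with the missing non-empty guard added. The function name and signature are ours, for illustration only.

```python
def safe_bias_add_range(input_elems: int, bias_elems: int,
                        total_range: float) -> float:
    """Illustrative analog of the vulnerable range computation.

    The unpatched kernel divides by the element count of the smaller
    input without checking for zero; a zero-element bias tensor then
    triggers an integer division by zero (SIGFPE on Linux x86).
    """
    smaller = min(input_elems, bias_elems)
    # The guard the fix introduces: reject empty inputs instead of dividing.
    if smaller == 0:
        raise ValueError("QuantizedBiasAdd inputs must be non-empty")
    return total_range / smaller
```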

Exploitation Scenario

An adversary targeting an organization's ML inference API identifies that the endpoint accepts external model uploads (common in MLaaS and internal ML platforms). The adversary crafts a TensorFlow SavedModel containing a QuantizedBiasAdd operation with a bias tensor explicitly shaped to zero elements. On model load and first inference call, the Eigen kernel attempts to divide by the number of elements of the bias tensor, triggering an integer division by zero. On Linux x86, this raises SIGFPE; combined with the undefined behavior at C++ level, the outcome ranges from process termination (DoS of the inference service) to, depending on compiler and runtime, potential memory corruption exploitable for code execution within the serving container.
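Pending the upgrade, a serving layer can cheaply reject the malformed shapes this scenario relies on before dispatching to quantized ops. A minimal sketch, assuming the serving code can inspect tensor shapes at model load or request time; `reject_empty_tensors` is a hypothetical helper, not a TensorFlow API.

```python
from math import prod

def reject_empty_tensors(shapes: dict[str, tuple[int, ...]]) -> None:
    """Raise if any tensor shape contains zero elements.

    A zero-element bias tensor is exactly what the crafted SavedModel in
    the scenario above supplies to QuantizedBiasAdd; screening shapes
    before inference blocks that input class outright.
    """
    for name, shape in shapes.items():
        if prod(shape) == 0:
            raise ValueError(f"tensor {name!r} has zero elements: {shape}")
```

This is defense in depth, not a substitute for the patch: it only covers the entry points where the check is wired in.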

Weaknesses (CWE)

CWE-369: Divide By Zero

CVSS Vector

CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H
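The 7.8 base score follows mechanically from this vector. A sketch of the CVSS v3.1 base equation for the Scope: Unchanged case, with metric weights taken from the specification (the spec's Roundup function is approximated here with `math.ceil`, which agrees for this vector):

```python
import math

def cvss31_base(av: float, ac: float, pr: float, ui: float,
                c: float, i: float, a: float) -> float:
    """CVSS v3.1 base score, Scope: Unchanged branch only."""
    iss = 1 - (1 - c) * (1 - i) * (1 - a)          # Impact Sub-Score
    impact = 6.42 * iss                            # Scope Unchanged
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    # Round up to one decimal, per the spec's Roundup function.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# Weights for AV:L / AC:L / PR:L (S:U) / UI:N / C:H / I:H / A:H
score = cvss31_base(av=0.55, ac=0.77, pr=0.62, ui=0.85,
                    c=0.56, i=0.56, a=0.56)
```

Evaluating `score` reproduces the 7.8 shown above.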

Timeline

Published
May 14, 2021
Last Modified
November 21, 2024
First Seen
May 14, 2021

Related Vulnerabilities