CVE-2022-35971: TensorFlow: DoS via invalid quantization tensor rank

HIGH PoC AVAILABLE
Published September 16, 2022
CISO Take

A network-exploitable denial-of-service in TensorFlow's quantization layer allows any unauthenticated attacker to crash ML inference services by sending a malformed tensor input. If your organization runs TensorFlow model serving endpoints—particularly models using quantization-aware training—patch to TF 2.10.0 or the backported 2.7.2/2.8.1/2.9.1 releases immediately. While this is not a data breach vector, crashing ML inference infrastructure can disrupt production AI-powered products and trigger SLA violations.

Risk Assessment

Moderate operational risk for organizations running exposed TensorFlow serving infrastructure. The CVSS 7.5 score accurately reflects the trivial exploitability: no authentication, no complexity, network-reachable. However, impact is limited strictly to availability—no data exfiltration or code execution. The attack surface narrows to deployments where (1) the model graph includes FakeQuantWithMinMaxVars (quantization-aware training), and (2) the serving layer allows external callers to influence tensor shapes. TF Serving behind an authenticated API gateway significantly reduces exposure.

Affected Systems

Package      Ecosystem   Vulnerable Range                               Patched
tensorflow   pip         < 2.7.2; 2.8.0 – < 2.8.1; 2.9.0 – < 2.9.1      2.7.2 / 2.8.1 / 2.9.1 / 2.10.0

Do you use tensorflow below 2.10.0 (and not on a backported patch release)? You're affected.

Severity & Risk

CVSS 3.1
7.5 / 10
EPSS
0.1%
chance of exploitation in 30 days
Higher than 20% of all CVEs
Exploitation Status
Exploit Available
Exploitation: MEDIUM
Sophistication
Trivial
Exploitation Confidence
medium
Public PoC indexed (trickest/cve)
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

Attack Vector (AV): Network
Attack Complexity (AC): Low
Privileges Required (PR): None
User Interaction (UI): None
Scope (S): Unchanged
Confidentiality (C): None
Integrity (I): None
Availability (A): High

Recommended Action

5 steps
  1. PATCH

    Upgrade TensorFlow to 2.10.0, or apply the cherry-picked patches for 2.9.1, 2.8.1, and 2.7.2 (all based on the referenced GitHub commit).

  2. VALIDATE INPUTS

    Add input shape validation at the API boundary before tensors reach the model graph—reject any request where min/max tensors for quantization ops are non-scalar (rank > 0).

  3. ISOLATE

    Place TF Serving behind an authenticated reverse proxy; do not expose raw tensor APIs to unauthenticated callers.

  4. MONITOR

    Alert on repeated TensorFlow CHECK assertion failures in serving logs (grep for 'Check failed' in TF serving stdout/stderr)—repeated failures from a single source indicate active exploitation.

  5. CONTAINER RESTARTS

    Ensure serving containers have health checks and auto-restart policies to minimize downtime if exploited before patching.
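
Step 2 above can be sketched as a pure-Python check at the API boundary, before a request payload ever reaches the model graph. The field names (`min`, `max`) are assumptions for illustration; match them to your model's actual SavedModel signature.

```python
def tensor_rank(value):
    """Infer the rank of a JSON-encoded tensor: a bare number is rank 0,
    and each level of list nesting adds one rank."""
    rank = 0
    while isinstance(value, list):
        rank += 1
        value = value[0] if value else None
    return rank

def validate_quant_request(payload):
    """Reject requests whose quantization min/max tensors are non-scalar.
    Field names here are hypothetical; adapt to your serving signature."""
    for field in ("min", "max"):
        if field in payload and tensor_rank(payload[field]) > 0:
            raise ValueError(
                f"rejected: {field!r} must be a scalar (rank 0), "
                f"got rank {tensor_rank(payload[field])}"
            )
    return payload

# A scalar min/max passes; a rank-1 min/max is rejected before it can
# reach FakeQuantWithMinMaxVars inside the model graph.
validate_quant_request({"inputs": [0.1, 0.2], "min": 0.0, "max": 1.0})
```

A check like this belongs in the reverse proxy or API gateway layer (step 3), so the serving process never sees the malformed tensor at all.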

CISA SSVC Assessment

Decision Track
Exploitation none
Automatable No
Technical Impact partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.2.6 - AI system availability and resilience
NIST AI RMF
MANAGE-2.2 - Mechanisms to sustain deployed AI system value
MEASURE-2.6 - Risk and uncertainty evaluation of AI systems

Frequently Asked Questions

What is CVE-2022-35971?

A network-exploitable denial-of-service in TensorFlow's quantization layer allows any unauthenticated attacker to crash ML inference services by sending a malformed tensor input. If your organization runs TensorFlow model serving endpoints—particularly models using quantization-aware training—patch to TF 2.10.0 or the backported 2.7.2/2.8.1/2.9.1 releases immediately. While this is not a data breach vector, crashing ML inference infrastructure can disrupt production AI-powered products and trigger SLA violations.

Is CVE-2022-35971 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2022-35971, increasing the risk of exploitation.

How to fix CVE-2022-35971?

1. PATCH: Upgrade TensorFlow to 2.10.0, or apply the cherry-picked patches for 2.9.1, 2.8.1, and 2.7.2 (all based on the referenced GitHub commit).
2. VALIDATE INPUTS: Add input shape validation at the API boundary before tensors reach the model graph—reject any request where min/max tensors for quantization ops are non-scalar (rank > 0).
3. ISOLATE: Place TF Serving behind an authenticated reverse proxy; do not expose raw tensor APIs to unauthenticated callers.
4. MONITOR: Alert on repeated TensorFlow CHECK assertion failures in serving logs (grep for 'Check failed' in TF Serving stdout/stderr)—repeated failures from a single source indicate active exploitation.
5. CONTAINER RESTARTS: Ensure serving containers have health checks and auto-restart policies to minimize downtime if exploited before patching.

What systems are affected by CVE-2022-35971?

This vulnerability affects the following AI/ML architecture patterns: model serving, quantization-aware training pipelines, edge/mobile model export pipelines, TFX production pipelines.

What is the CVSS score for CVE-2022-35971?

CVE-2022-35971 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.06%.

Technical Details

NVD Description

TensorFlow is an open source platform for machine learning. If `FakeQuantWithMinMaxVars` is given `min` or `max` tensors of a nonzero rank, it results in a `CHECK` fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 785d67a78a1d533759fcd2f5e8d6ef778de849e0. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.
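
A minimal reproduction sketch of the described behavior, assuming a vulnerable TensorFlow build (< 2.7.2, 2.8.0, or 2.9.0); on patched releases the op raises a catchable `InvalidArgumentError` instead of aborting the process. The import is guarded so the sketch degrades gracefully where TensorFlow is not installed, and the crashing call is left commented out.

```python
try:
    import tensorflow as tf  # behavior differs between vulnerable and patched builds
    HAVE_TF = True
except ImportError:
    HAVE_TF = False

def fake_quant(min_val, max_val):
    """Invoke FakeQuantWithMinMaxVars with caller-controlled min/max tensors."""
    x = tf.constant([0.1, 0.2, 0.3])
    return tf.quantization.fake_quant_with_min_max_vars(x, min=min_val, max=max_val)

if HAVE_TF:
    fake_quant(0.0, 1.0)  # valid: scalar (rank-0) min/max, fine on all versions
    # fake_quant([0.0, 0.5], [1.0, 2.0])
    # ^ rank-1 min/max: on vulnerable builds this CHECK-fails and aborts the
    #   whole serving process; patched builds reject it with an error instead.
```

Because a `CHECK` failure aborts the process rather than raising an exception, no try/except in application code can contain it—which is why the only real mitigations are patching and rejecting non-scalar min/max before the op runs.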

Exploitation Scenario

An adversary identifies a public-facing TensorFlow Serving endpoint (via Shodan, fingerprinting gRPC port 8500, or HTTP port 8501). They probe the model's signature to discover that it uses quantization ops (visible in SavedModel metadata). They then craft a REST inference request to /v1/models/target:predict, substituting the expected scalar min/max tensors with rank-1 tensors (e.g., shape [2] instead of shape []). The serving process hits the CHECK assertion in FakeQuantWithMinMaxVars, crashes, and the endpoint becomes unavailable. Automating this with a low-rate loop (one request per restart window) maintains a persistent denial-of-service against the ML inference layer with minimal network footprint.
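
Detecting the low-rate loop described above can be sketched by scanning serving logs for repeated CHECK failures and correlating them with request sources. The log lines below are synthetic, and the assumption that each line starts with a source IP is hypothetical; real TF Serving stderr will need its own parsing.

```python
from collections import Counter

def count_check_failures(log_lines):
    """Count 'Check failed' occurrences per source.
    Assumes (hypothetically) the source IP is the first token on each line."""
    failures = Counter()
    for line in log_lines:
        if "Check failed" in line:
            source = line.split()[0]
            failures[source] += 1
    return failures

def suspicious_sources(log_lines, threshold=3):
    """Repeated CHECK failures from one source suggest active exploitation."""
    return [src for src, n in count_check_failures(log_lines).items()
            if n >= threshold]

# Synthetic sample: three crash-triggering requests from one client.
logs = [
    "10.0.0.5 F tensorflow/core/... Check failed: min tensor rank",
    "10.0.0.5 F tensorflow/core/... Check failed: min tensor rank",
    "10.0.0.5 F tensorflow/core/... Check failed: min tensor rank",
    "10.0.0.9 request served in 12ms",
]
suspicious_sources(logs)  # -> ["10.0.0.5"]
```

Paired with the container health checks from the mitigation steps, an alert on this signal turns a silent crash loop into an actionable incident.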

Weaknesses (CWE)

CWE-617: Reachable Assertion

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

Timeline

Published
September 16, 2022
Last Modified
November 21, 2024
First Seen
September 16, 2022
