CVE-2022-23580: TensorFlow: uncontrolled allocation DoS in shape inference

Severity: MEDIUM · PoC available · CISA SSVC: Track*
Published February 4, 2022
CISO Take

Any TensorFlow deployment exposing model inference to authenticated users is vulnerable to targeted availability attacks via crafted tensor inputs. The CVSS 6.5 rating understates operational risk in production ML serving: a low-privilege API consumer can crash your inference service with a single request. Patch immediately to TF 2.8.0, 2.7.1, 2.6.3, or 2.5.3 and enforce input shape validation at the API gateway layer.

Risk Assessment

Medium severity by CVSS but operationally high-risk for ML serving infrastructure. Exploitation requires only low privileges (API key or authenticated session), no user interaction, and low complexity — making it accessible to any adversary with inference API access. Impact is confined to availability (no data exfiltration path), but in production AI systems, inference downtime directly translates to service outages and SLA violations. Organizations running multi-tenant ML platforms or exposing TensorFlow serving APIs externally face the highest exposure.

Affected Systems

Package: tensorflow
Ecosystem: pip
Vulnerable Range: 2.5.x before 2.5.3, 2.6.x before 2.6.3, and 2.7.0
Patched: 2.5.3, 2.6.3, 2.7.1, 2.8.0


Severity & Risk

CVSS 3.1: 6.5 / 10 (MEDIUM)
EPSS: 0.3% chance of exploitation in 30 days (higher than 53% of all CVEs)
Exploitation Status: Exploit available (public PoC indexed in trickest/cve)
Exploitation Likelihood: Medium
Sophistication: Trivial
Exploitation Confidence: Medium
CISA SSVC: Public PoC

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

Attack Vector (AV): Network
Attack Complexity (AC): Low
Privileges Required (PR): Low
User Interaction (UI): None
Scope (S): Unchanged
Confidentiality (C): None
Integrity (I): None
Availability (A): High

Recommended Action

  1. PATCH

    Upgrade to TensorFlow 2.8.0, 2.7.1, 2.6.3, or 2.5.3 — apply the fix from commit 1361fb7e.

  2. VALIDATE INPUT

    Enforce strict tensor shape and size limits at the API gateway before requests reach TF runtime; reject any tensor dimension exceeding expected bounds.

  3. RATE LIMIT

    Apply per-user/per-key rate limiting on inference endpoints to contain blast radius from abuse.

  4. RESOURCE LIMITS

    Configure OOM kill policies and container memory limits on inference pods to enable fast recovery.

  5. DETECT

    Monitor for sudden memory spikes or service restarts on inference nodes as indicators of exploitation attempts.

  6. ISOLATE

    Run inference services in isolated containers/processes so a crash does not cascade to adjacent services.
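The input-validation step above (step 2) can be sketched as a simple pre-filter at the gateway. This is an illustrative sketch, not TF Serving code: `MAX_DIM`, `MAX_ELEMENTS`, and `validate_shape` are hypothetical names, and the bounds should be tuned to the shapes your models actually accept.

```python
# Sketch of step 2 (VALIDATE INPUT): reject oversized tensor shapes before
# a request ever reaches the TensorFlow runtime. All names and limits here
# are illustrative assumptions, not part of any TF Serving API.
import math

MAX_DIM = 65_536           # largest single dimension we will accept
MAX_ELEMENTS = 16_777_216  # cap on total element count (16M elements)

def validate_shape(shape):
    """Return True if a requested tensor shape is within safe bounds."""
    if not shape:
        return False
    for dim in shape:
        if not isinstance(dim, int) or dim < 0 or dim > MAX_DIM:
            return False
    # Bound the total allocation, not just each dimension.
    return math.prod(shape) <= MAX_ELEMENTS

# A gateway would call this on every inference request:
assert validate_shape([32, 224, 224, 3])   # typical image batch: accepted
assert not validate_shape([2**31])         # attacker-sized dimension: rejected
```

Rejecting at the gateway keeps malformed shapes away from the vulnerable shape-inference path entirely, independent of whether the TF runtime behind it is patched.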

CISA SSVC Assessment

Decision: Track*
Exploitation: PoC
Automatable: No
Technical Impact: Partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Compliance Impact

This CVE is relevant to:

EU AI Act: Article 15 - Accuracy, robustness and cybersecurity
ISO 42001: A.9.3 - AI system operation and monitoring
NIST AI RMF: MANAGE-2.2 - Mechanisms to sustain deployed AI and apply risk mitigations
OWASP LLM Top 10: LLM10:2025 - Unbounded Consumption

Frequently Asked Questions

What is CVE-2022-23580?

CVE-2022-23580 is an uncontrolled memory allocation vulnerability in TensorFlow's shape inference. During shape inference, TensorFlow allocates a large vector sized by a value read from a user-controlled tensor, so a crafted inference request can exhaust memory and crash the process. Rated CVSS 6.5 (MEDIUM), it impacts availability only; the fix ships in TensorFlow 2.8.0 and was backported to 2.7.1, 2.6.3, and 2.5.3.

Is CVE-2022-23580 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2022-23580, increasing the risk of exploitation.

How to fix CVE-2022-23580?

  1. PATCH: Upgrade to TensorFlow 2.8.0, 2.7.1, 2.6.3, or 2.5.3 — apply the fix from commit 1361fb7e.
  2. VALIDATE INPUT: Enforce strict tensor shape and size limits at the API gateway before requests reach the TF runtime; reject any tensor dimension exceeding expected bounds.
  3. RATE LIMIT: Apply per-user/per-key rate limiting on inference endpoints to contain the blast radius from abuse.
  4. RESOURCE LIMITS: Configure OOM kill policies and container memory limits on inference pods to enable fast recovery.
  5. DETECT: Monitor for sudden memory spikes or service restarts on inference nodes as indicators of exploitation attempts.
  6. ISOLATE: Run inference services in isolated containers/processes so a crash does not cascade to adjacent services.

What systems are affected by CVE-2022-23580?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, inference endpoints, ML platform APIs.

What is the CVSS score for CVE-2022-23580?

CVE-2022-23580 has a CVSS v3.1 base score of 6.5 (MEDIUM). The EPSS exploitation probability is 0.30%.

Technical Details

NVD Description

Tensorflow is an Open Source Machine Learning Framework. During shape inference, TensorFlow can allocate a large vector based on a value from a tensor controlled by the user. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.

Exploitation Scenario

An attacker with a valid API key to a TensorFlow Serving endpoint constructs a crafted inference request containing a tensor with an extremely large dimension value in a field that feeds into shape inference. When TF processes the request, the shape-inference code (shape_inference.cc, lines 788-790) allocates a vector sized by the attacker-controlled value, exhausting available memory. The inference server process crashes or becomes unresponsive. The attacker can repeat this on recovery to maintain a persistent DoS condition against a competitor's AI API, an internal ML platform, or a security-critical AI decision system (e.g., fraud detection, anomaly detection). No custom exploit tooling is required — a single malformed gRPC or REST inference request suffices.
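The underlying flaw class (CWE-789, uncontrolled memory allocation) can be illustrated in a framework-agnostic way. The functions below are hypothetical stand-ins, not TensorFlow internals; the real sink is in shape_inference.cc, where the patch adds exactly this kind of bound check before allocating:

```python
# Minimal illustration of the vulnerability class: an allocation sized
# directly by an attacker-controlled value, versus the bounded variant.
# Function names and the max_dim limit are illustrative assumptions.

def unsafe_infer_shape(dim_from_request: int) -> list:
    # Vulnerable pattern: buffer sized by untrusted input. A request
    # carrying a huge value here exhausts memory and crashes the process.
    return [0] * dim_from_request

def safe_infer_shape(dim_from_request: int, max_dim: int = 1_000_000) -> list:
    # Patched pattern: validate the attacker-reachable value before
    # allocating anything proportional to it.
    if dim_from_request < 0 or dim_from_request > max_dim:
        raise ValueError(f"dimension {dim_from_request} out of bounds")
    return [0] * dim_from_request
```

Both functions behave identically on benign input; they diverge only when the request carries a hostile dimension, which is why the bug escaped notice until fuzzed.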

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
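The vector string above encodes the eight base metrics listed under Attack Surface. A small sketch (hypothetical helper, assuming the standard `Metric:Value` slash-separated format) shows how to unpack it:

```python
# Parse the CVSS v3.1 vector string into its component metrics.
# parse_cvss is an illustrative helper, not a standard library function.
vector = "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H"

def parse_cvss(v: str) -> dict:
    parts = v.split("/")
    if parts[0] != "CVSS:3.1":
        raise ValueError("not a CVSS v3.1 vector")
    return dict(p.split(":") for p in parts[1:])

metrics = parse_cvss(vector)
# Availability is the only impacted dimension (A:H with C:N and I:N),
# which is why this is a pure denial-of-service finding.
assert metrics["A"] == "H" and metrics["C"] == "N" and metrics["I"] == "N"
```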

Timeline

Published
February 4, 2022
Last Modified
November 21, 2024
First Seen
February 4, 2022

Related Vulnerabilities