CVE-2020-26270: TensorFlow: DoS via zero-length input to LSTM/GRU on CUDA

LOW
Published December 10, 2020
CISO Take

Low-severity local denial-of-service in TensorFlow's CUDA backend: a zero-length input to LSTM/GRU layers triggers a CHECK failure that crashes the process. If your ML inference infrastructure exposes LSTM/GRU models to user-controlled inputs on GPU hardware, upgrade to a patched version immediately. No data exposure, no remote vector—priority is availability of GPU-backed inference endpoints.

Risk Assessment

Overall risk is low. The local attack vector (AV:L) significantly constrains exploitability to scenarios where an adversary already has local access or can inject inputs into a running inference pipeline. The CVSS v3.1 base score is 3.3, with only a low availability impact. Not in CISA KEV, no public exploitation reported. Risk escalates meaningfully only in multi-tenant or shared inference environments where end users can submit raw tensor inputs to CUDA-backed models.

Affected Systems

Package: tensorflow (pip)
Vulnerable range: 1.15.x < 1.15.5; 2.0.x < 2.0.4; 2.1.x < 2.1.3; 2.2.x < 2.2.2; 2.3.x < 2.3.2
Patched: 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, 2.4.0

If you run an unpatched tensorflow release with LSTM/GRU models on the CUDA backend, you're affected.

Severity & Risk

CVSS 3.1
3.3 / 10
EPSS
0.02%
chance of exploitation in 30 days
Higher than 5% of all CVEs
Exploitation Status
No known exploitation
Sophistication
Trivial

Attack Surface

Attack Vector (AV): Local
Attack Complexity (AC): Low
Privileges Required (PR): Low
User Interaction (UI): None
Scope (S): Unchanged
Confidentiality (C): None
Integrity (I): None
Availability (A): Low

Recommended Action

5 steps
  1. Upgrade TensorFlow to patched versions: 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, or 2.4.0+.

  2. Workaround if immediate patching is not feasible: add input validation at the serving layer to reject zero-length sequences before they reach the model.

  3. In model serving frameworks (TF Serving, Triton), add shape-validation middleware that enforces a minimum sequence length greater than zero (a minimal sketch follows this list).

  4. Monitor CUDA backend crashes and TF process restarts as anomaly signals.

  5. Audit shared inference endpoints that accept raw tensor inputs from untrusted users.
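A minimal sketch of the serving-layer guard from steps 2 and 3, assuming a Python pre-processing hook in front of the model; the function name and shape convention are illustrative, not part of any serving framework's API:

```python
import numpy as np

def validate_sequence_batch(batch: np.ndarray) -> np.ndarray:
    """Reject zero-length sequences before they reach a CUDA-backed LSTM/GRU.

    Assumes inputs shaped (batch, timesteps, features); raises ValueError
    instead of letting an empty time dimension crash the TensorFlow process.
    """
    if batch.ndim != 3:
        raise ValueError(f"expected a rank-3 sequence tensor, got rank {batch.ndim}")
    if batch.shape[0] == 0 or batch.shape[1] == 0:
        raise ValueError("empty batch or zero-length sequence rejected")
    return batch

# Usage: call this guard in the request handler before model.predict(...),
# so malformed requests are rejected at the serving layer, not on the GPU.
```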

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.1.3 - AI system operational robustness
NIST AI RMF
MANAGE 2.2 - Mechanisms to sustain effectiveness of AI risk management
MAP 5.1 - Likelihood and magnitude of potential impacts identified

Frequently Asked Questions

What is CVE-2020-26270?

Low-severity local denial-of-service in TensorFlow's CUDA backend: a zero-length input to LSTM/GRU layers triggers a CHECK failure that crashes the process. If your ML inference infrastructure exposes LSTM/GRU models to user-controlled inputs on GPU hardware, upgrade to a patched version immediately. No data exposure, no remote vector—priority is availability of GPU-backed inference endpoints.

Is CVE-2020-26270 actively exploited?

No confirmed active exploitation of CVE-2020-26270 has been reported, but organizations should still patch proactively.

How to fix CVE-2020-26270?

1. Upgrade TensorFlow to patched versions: 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, or 2.4.0+.
2. Workaround if immediate patching is not feasible: add input validation at the serving layer to reject zero-length sequences before they reach the model.
3. In model serving frameworks (TF Serving, Triton), add shape-validation middleware that enforces a minimum sequence length greater than zero.
4. Monitor CUDA backend crashes and TF process restarts as anomaly signals.
5. Audit shared inference endpoints that accept raw tensor inputs from untrusted users.

What systems are affected by CVE-2020-26270?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, inference infrastructure.

What is the CVSS score for CVE-2020-26270?

CVE-2020-26270 has a CVSS v3.1 base score of 3.3 (LOW). The EPSS exploitation probability is 0.02%.

Technical Details

NVD Description

In affected versions of TensorFlow running an LSTM/GRU model where the LSTM/GRU layer receives an input with zero-length results in a CHECK failure when using the CUDA backend. This can result in a query-of-death vulnerability, via denial of service, if users can control the input to the layer. This is fixed in versions 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0.

Exploitation Scenario

An adversary with access to a shared GPU-backed inference endpoint (e.g., an internal ML platform or a public API wrapping a sequence model) submits a prediction request with a zero-length sequence tensor to an LSTM or GRU layer. The CUDA backend hits a CHECK failure and crashes the TensorFlow process. In a production inference server without process supervision or health-check recovery, this results in service outage. In a shared multi-tenant ML platform, a low-privileged user could repeatedly crash the GPU inference worker, causing a sustained denial of service for all tenants.
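A minimal reproduction sketch of that scenario, assuming a vulnerable TensorFlow release (older than the patched versions listed above) and an available CUDA GPU; the layer size and input shape are illustrative, and on patched builds the call returns a normal error instead of aborting the process:

```python
import tensorflow as tf

# A small recurrent layer; the cuDNN-backed kernel is used on GPU by default.
layer = tf.keras.layers.LSTM(units=8)

with tf.device("/GPU:0"):
    # Zero-length time dimension: shape (batch=1, timesteps=0, features=4).
    empty_sequence = tf.zeros([1, 0, 4], dtype=tf.float32)
    # On vulnerable versions with the CUDA backend this triggers a CHECK
    # failure and terminates the whole process (the "query of death").
    layer(empty_sequence)
```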

Weaknesses (CWE)

CVSS Vector

CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:L

Timeline

Published
December 10, 2020
Last Modified
November 21, 2024
First Seen
December 10, 2020

Related Vulnerabilities