CVE-2020-26270: TensorFlow: DoS via zero-length input to LSTM/GRU on CUDA
Severity: LOW. Low-severity local denial-of-service in TensorFlow's CUDA backend: a zero-length input to LSTM/GRU layers triggers a CHECK failure that crashes the process. If your ML inference infrastructure exposes LSTM/GRU models to user-controlled inputs on GPU hardware, upgrade to a patched version immediately. No data exposure, no remote vector—priority is availability of GPU-backed inference endpoints.
Risk Assessment
Overall risk is low. The local attack vector (AV:L) constrains exploitability to scenarios where an adversary already has local access or can inject inputs into a running inference pipeline. The CVSS v3.1 base score is 3.3, with a low availability impact as the only affected dimension. Not in CISA KEV; no public exploitation reported. Risk escalates meaningfully only in multi-tenant or shared inference environments where end users can submit raw tensor inputs to CUDA-backed models.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| tensorflow | pip | < 1.15.5; 2.0.0–2.0.3; 2.1.0–2.1.2; 2.2.0–2.2.1; 2.3.0–2.3.1 | 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, 2.4.0 |

You are affected only if you run a vulnerable version with the CUDA backend and expose LSTM/GRU layers to untrusted input.
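To triage quickly against the patched versions listed above, a minimal version-floor check can be scripted. This is an illustrative sketch (the `is_patched` helper is hypothetical, not part of TensorFlow), and it assumes plain numeric version strings without pre-release suffixes:

```python
# Sketch: check whether an installed TensorFlow version falls in a
# release line patched for CVE-2020-26270.
# Patched floors: 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0+.

PATCHED_FLOORS = {
    (1, 15): (1, 15, 5),
    (2, 0): (2, 0, 4),
    (2, 1): (2, 1, 3),
    (2, 2): (2, 2, 2),
    (2, 3): (2, 3, 2),
}

def is_patched(version: str) -> bool:
    # Assumes a plain "X.Y.Z" string; pre-release tags are not handled.
    parts = tuple(int(p) for p in version.split(".")[:3])
    line = parts[:2]
    if line in PATCHED_FLOORS:
        return parts >= PATCHED_FLOORS[line]
    # Anything at or beyond 2.4.0 ships the fix; older lines do not.
    return parts >= (2, 4, 0)

print(is_patched("2.3.1"))  # False: below the 2.3.2 floor
print(is_patched("2.4.0"))  # True
```

In practice, `pip show tensorflow` or `tf.__version__` supplies the version string to check.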
Recommended Action
1. Upgrade TensorFlow to a patched version: 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, or 2.4.0+.
2. If immediate patching is not feasible, add input validation at the serving layer to reject zero-length sequences before they reach the model.
3. In model serving frameworks (TF Serving, Triton), add shape validation middleware that enforces a minimum sequence length of 1.
4. Monitor CUDA backend crashes and TF process restarts as anomaly signals.
5. Audit shared inference endpoints that accept raw tensor inputs from untrusted users.
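The serving-layer validation in steps 2–3 can be sketched as a small pre-processing check. This is a minimal illustration, not TF Serving or Triton API: `validate_sequence_batch` and `InvalidInputError` are hypothetical names, and the check assumes batched sequence inputs shaped `(batch, time, features)`:

```python
# Sketch: reject zero-length sequences at the serving layer before they
# reach a CUDA-backed LSTM/GRU. Plug this into whatever pre-processing
# hook your serving stack provides.
import numpy as np

class InvalidInputError(ValueError):
    """Raised for requests that must not reach the model."""

def validate_sequence_batch(batch: np.ndarray) -> np.ndarray:
    # Expect (batch, time, features); a zero-length time axis is what
    # trips the CHECK failure in the vulnerable CUDA kernels.
    if batch.ndim != 3:
        raise InvalidInputError(f"expected rank-3 input, got rank {batch.ndim}")
    if batch.shape[0] == 0 or batch.shape[1] == 0:
        raise InvalidInputError(f"zero-length dimension in shape {batch.shape}")
    return batch

# A zero-length time axis is rejected instead of crashing the worker:
try:
    validate_sequence_batch(np.zeros((1, 0, 8), dtype=np.float32))
except InvalidInputError as e:
    print("rejected:", e)
```

Rejecting the request with a 4xx-style error keeps the worker alive, whereas letting the tensor through on a vulnerable build kills the process for all tenants.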
Frequently Asked Questions
What is CVE-2020-26270?
CVE-2020-26270 is a low-severity local denial-of-service in TensorFlow's CUDA backend: a zero-length input to LSTM/GRU layers triggers a CHECK failure that crashes the process. There is no data exposure and no remote vector; the impact is on the availability of GPU-backed inference endpoints.
Is CVE-2020-26270 actively exploited?
No confirmed active exploitation of CVE-2020-26270 has been reported, but organizations should still patch proactively.
How to fix CVE-2020-26270?
1. Upgrade TensorFlow to a patched version: 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, or 2.4.0+.
2. If immediate patching is not feasible, add input validation at the serving layer to reject zero-length sequences before they reach the model.
3. In model serving frameworks (TF Serving, Triton), add shape validation middleware that enforces a minimum sequence length of 1.
4. Monitor CUDA backend crashes and TF process restarts as anomaly signals.
5. Audit shared inference endpoints that accept raw tensor inputs from untrusted users.
What systems are affected by CVE-2020-26270?
This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, inference infrastructure.
What is the CVSS score for CVE-2020-26270?
CVE-2020-26270 has a CVSS v3.1 base score of 3.3 (LOW). The EPSS exploitation probability is 0.02%.
Technical Details
NVD Description
In affected versions of TensorFlow running an LSTM/GRU model where the LSTM/GRU layer receives an input with zero-length results in a CHECK failure when using the CUDA backend. This can result in a query-of-death vulnerability, via denial of service, if users can control the input to the layer. This is fixed in versions 1.15.5, 2.0.4, 2.1.3, 2.2.2, 2.3.2, and 2.4.0.
Exploitation Scenario
An adversary with access to a shared GPU-backed inference endpoint (e.g., an internal ML platform or a public API wrapping a sequence model) submits a prediction request with a zero-length sequence tensor to an LSTM or GRU layer. The CUDA backend hits a CHECK failure and crashes the TensorFlow process. In a production inference server without process supervision or health-check recovery, this results in service outage. In a shared multi-tenant ML platform, a low-privileged user could repeatedly crash the GPU inference worker, causing a sustained denial of service for all tenants.
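The "query of death" described above hinges on the fact that a zero-length sequence is a perfectly well-formed tensor, so nothing upstream rejects it unless a check is added explicitly. The sketch below uses NumPy only, so it runs without TensorFlow or a GPU; it merely shows the payload shape, not the crash itself:

```python
# Illustration of the query-of-death payload shape. A zero-length time
# dimension is a constructible, valid array: no generic serialization or
# framing layer will flag it.
import numpy as np

payload = np.zeros((1, 0, 32), dtype=np.float32)  # (batch, time=0, features)
print(payload.shape)  # (1, 0, 32)
print(payload.size)   # 0 elements, yet a structurally valid array
# Feeding a tensor like this to an LSTM/GRU layer on a vulnerable
# TensorFlow build using the CUDA backend trips the CHECK failure
# and terminates the process.
```

This is why the mitigation lives at the serving layer: only an explicit shape check distinguishes this payload from a legitimate request.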
CVSS Vector
CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:L
Related Vulnerabilities

| CVE | CVSS | Summary | Relationship |
|---|---|---|---|
| CVE-2020-15196 | 9.9 | TensorFlow: heap OOB read in sparse/ragged count ops | Same package: tensorflow |
| CVE-2020-15205 | 9.8 | TensorFlow: heap overflow in StringNGrams, ASLR bypass | Same package: tensorflow |
| CVE-2020-15208 | 9.8 | TFLite: OOB read/write via tensor dimension mismatch | Same package: tensorflow |
| CVE-2019-16778 | 9.8 | TensorFlow: heap overflow in UnsortedSegmentSum op | Same package: tensorflow |
| CVE-2022-23587 | 9.8 | TensorFlow: integer overflow in Grappler enables RCE | Same package: tensorflow |