CVE-2020-15213: TensorFlow Lite: OOM DoS via crafted segment sum model
MEDIUM · PoC AVAILABLE

A specially crafted TFLite model can trigger unbounded memory allocation via manipulated segment IDs, crashing any inference service that loads it. If your ML pipeline accepts externally-sourced or user-provided TFLite models, patch to TF 2.2.1+ or 2.3.1+ immediately. If patching is not immediate, deploy a custom Verifier to cap segment ID values before model loading.
Risk Assessment
Medium risk overall, but elevated for organizations running TFLite inference services that consume externally-sourced models. Exploit requires crafting a malicious model file (moderate effort), but no authentication or user interaction is needed once the model reaches a vulnerable loader. Network-accessible inference APIs that accept model uploads are the highest-risk surface. No evidence of active exploitation in the wild.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| tensorflow | pip | < 2.2.1; 2.3.0 | 2.2.1, 2.3.1 |
If you load TFLite models with an affected TensorFlow version, you are affected.
Recommended Action
1. PATCH: Upgrade TensorFlow to 2.2.1 or 2.3.1 at minimum.
2. WORKAROUND (static segment IDs): Add a custom TFLite Verifier that enforces a maximum allowable value in the segment-IDs tensor before model execution.
3. WORKAROUND (runtime segment IDs): Add bounds validation between inference steps when segment IDs are generated as intermediate tensor outputs.
4. ARCHITECTURAL: Implement model integrity controls: only load models from trusted, signed registries, and sandbox model loading in isolated processes with memory limits to contain the blast radius.
5. DETECT: Monitor inference workers for OOM crashes and unexpected process restarts as potential exploitation indicators.
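The validation at the core of workarounds 2 and 3 can be sketched as below. How you obtain the segment-IDs tensor depends on your deployment (flatbuffer inspection before loading for static IDs, reading intermediate outputs between invocations for runtime IDs); the cap value here is illustrative, not part of the advisory.

```python
import numpy as np

# Illustrative policy cap (assumption): the largest legitimate segment
# count your models are expected to produce.
MAX_SEGMENTS = 10_000

def check_segment_ids(segment_ids, max_segments=MAX_SEGMENTS):
    """Reject a segment-IDs tensor whose last element would force an
    oversized output allocation (SegmentSum sizes its output as
    last id + 1 rows)."""
    ids = np.asarray(segment_ids).ravel()
    if ids.size == 0:
        return
    last = int(ids[-1])
    if last < 0 or last + 1 > max_segments:
        raise ValueError(
            f"segment id {last} implies {last + 1} output rows; "
            f"cap is {max_segments}"
        )
```

Call this from your Verifier (static case) or between inference steps (runtime case) before the tensor ever reaches the segment sum kernel.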
Frequently Asked Questions
What is CVE-2020-15213?
A specially crafted TFLite model can trigger unbounded memory allocation via manipulated segment IDs, crashing any inference service that loads it. If your ML pipeline accepts externally-sourced or user-provided TFLite models, patch to TF 2.2.1+ or 2.3.1+ immediately. If patching is not immediate, deploy a custom Verifier to cap segment ID values before model loading.
Is CVE-2020-15213 actively exploited?
There is no evidence of active exploitation in the wild, but proof-of-concept exploit code is publicly available for CVE-2020-15213, which increases the risk of exploitation.
How to fix CVE-2020-15213?
1. PATCH: Upgrade TensorFlow to 2.2.1 or 2.3.1 at minimum.
2. WORKAROUND (static segment IDs): Add a custom TFLite Verifier that enforces a maximum allowable value in the segment-IDs tensor before model execution.
3. WORKAROUND (runtime segment IDs): Add bounds validation between inference steps when segment IDs are generated as intermediate tensor outputs.
4. ARCHITECTURAL: Implement model integrity controls: only load models from trusted, signed registries, and sandbox model loading in isolated processes with memory limits to contain the blast radius.
5. DETECT: Monitor inference workers for OOM crashes and unexpected process restarts as potential exploitation indicators.
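The sandboxing in step 4 can be sketched as follows (POSIX-only, assumptions: the loader body is a placeholder for your real `tf.lite.Interpreter` construction, and the 2 GiB cap is illustrative). The point is that an OOM triggered by a malicious model kills only a throwaway child process, not the serving process.

```python
import subprocess
import sys

def load_model_sandboxed(model_path, cap_bytes=2 << 30):
    """Load a model in a throwaway interpreter with a hard address-space
    cap, so an unbounded allocation OOMs only the child process."""
    code = (
        "import resource, sys\n"
        f"resource.setrlimit(resource.RLIMIT_AS, ({cap_bytes}, {cap_bytes}))\n"
        # Placeholder: a real deployment would build tf.lite.Interpreter
        # (and run any Verifier checks) here instead of just reading bytes.
        f"data = open({model_path!r}, 'rb').read()\n"
        "sys.exit(0 if data else 1)\n"
    )
    proc = subprocess.run([sys.executable, "-c", code], timeout=60)
    return proc.returncode == 0
```

A nonzero exit (including a MemoryError or kernel OOM kill in the child) signals the model should be rejected and logged, which also feeds the detection step.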
What systems are affected by CVE-2020-15213?
This vulnerability affects the following AI/ML architecture patterns: model serving, inference pipelines, edge ML deployments, training pipelines.
What is the CVSS score for CVE-2020-15213?
CVE-2020-15213 has a CVSS v3.1 base score of 4.0 (MEDIUM). The EPSS exploitation probability is 0.22%.
Technical Details
NVD Description
In TensorFlow Lite before versions 2.2.1 and 2.3.1, models using segment sum can trigger a denial of service by causing an out of memory allocation in the implementation of segment sum. Since code uses the last element of the tensor holding them to determine the dimensionality of output tensor, attackers can use a very large value to trigger a large allocation. The issue is patched in commit 204945b19e44b57906c9344c0d00120eeeae178a and is released in TensorFlow versions 2.2.1, or 2.3.1. A potential workaround would be to add a custom `Verifier` to limit the maximum value in the segment ids tensor. This only handles the case when the segment ids are stored statically in the model, but a similar validation could be done if the segment ids are generated at runtime, between inference steps. However, if the segment ids are generated as outputs of a tensor during inference steps, then there are no possible workaround and users are advised to upgrade to patched code.
Exploitation Scenario
An adversary targets a TFLite-based image classification API that allows customers to upload custom models for fine-tuned inference. They craft a malicious .tflite model file embedding a segment sum operation with the last segment ID tensor element set to a value like 2^30, causing the inference runtime to attempt allocating gigabytes of memory. When the API loads and executes the model, the worker process crashes with OOM. By repeatedly uploading such models, the adversary sustains a denial-of-service condition against the inference fleet, degrading availability without needing credentials or exploiting complex logic.
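A back-of-envelope calculation shows why a single crafted value suffices. Because the output row count is taken from the last segment-ID element, the attempted allocation scales with the attacker's value, not with the model's actual data (the inner dimension of 1 and float32 itemsize below are illustrative assumptions):

```python
def attempted_allocation_bytes(last_segment_id, inner_dim=1, itemsize=4):
    # SegmentSum sizes its output as (last_segment_id + 1) rows,
    # regardless of how little real data the model contains.
    return (last_segment_id + 1) * inner_dim * itemsize

# A single last element of 2**30 forces an attempt at roughly 4 GiB
# per inner dimension, enough to OOM a typical inference worker.
gib = attempted_allocation_bytes(2**30) / 2**30
```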
Weaknesses (CWE)
CVSS Vector
CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:C/C:N/I:N/A:L
Related Vulnerabilities
All in the same package (tensorflow):
- CVE-2020-15196 (9.9): TensorFlow: heap OOB read in sparse/ragged count ops
- CVE-2020-15205 (9.8): TensorFlow: heap overflow in StringNGrams, ASLR bypass
- CVE-2020-15208 (9.8): TFLite: OOB read/write via tensor dimension mismatch
- CVE-2019-16778 (9.8): TensorFlow: heap overflow in UnsortedSegmentSum op
- CVE-2022-23587 (9.8): TensorFlow: integer overflow in Grappler enables RCE