CVE-2020-15211: TensorFlow Lite: heap OOB RW via flatbuffer tensor index

MEDIUM PoC AVAILABLE
Published September 25, 2020
CISO Take

Any deployment loading TFLite flatbuffer models from untrusted sources — edge devices, model serving APIs, mobile apps — is exposed to heap out-of-bounds read/write, potentially leading to code execution or memory disclosure. Patch immediately to TFLite 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1; if patching is delayed, add a custom Verifier that rejects -1 tensor indices on non-optional operators. This is a 2020 vulnerability — if your TFLite versions are still unpatched in 2026, your AI supply chain hygiene needs urgent attention.

Risk Assessment

CVSS 4.8 medium with high attack complexity underestimates operational risk for organizations exposing model-loading endpoints. The write gadget is offset-constrained, reducing the likelihood of arbitrary RCE, but read/write primitives on heap-allocated tensor arrays in model serving infrastructure can enable memory disclosure of inference inputs or lateral movement within ML pipelines. Attack complexity is rated high because exploitation requires crafting a precisely malformed flatbuffer model, yet doing so is well within reach of adversaries familiar with the TFLite format. Not in CISA KEV, and no active exploitation has been confirmed as of the analysis date.

Affected Systems

Package      Ecosystem   Patched
tensorflow   pip         No patched range listed
tensorflow   leap        No patched range listed

Severity & Risk

CVSS 3.1: 4.8 / 10 (Medium)
EPSS: 0.3% chance of exploitation within 30 days (higher than 57% of all CVEs)
Exploitation status: exploit available; exploitation likelihood MEDIUM
Sophistication: Advanced
Exploitation confidence: medium (public PoC indexed in trickest/cve)

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

AV: Network
AC: High
PR: None
UI: None
S: Unchanged
C: Low
I: Low
A: None

Recommended Action

  1. PATCH

    Upgrade TFLite to 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1 — patches applied across 6 commits (46d5b08, 00302787, e11f5558, cd31fd0c, 1970c21, fff2c83).

  2. WORKAROUND

    If patching is delayed, implement a custom Verifier at model load time that enforces: (a) only operators explicitly supporting optional inputs may use the -1 tensor index; (b) -1 is permitted only on tensor slots declared optional in the operator spec.

  3. DETECT

    Audit model loading code for flatbuffer deserialization without index validation. Scan artifact repositories for .tflite files with unexpected negative index values using flatbuffers tooling.

  4. HARDEN

    Never load TFLite models from untrusted or unverified sources without schema validation. Apply input validation at model ingestion boundaries.

  5. INVENTORY

    Identify all TFLite consumers in your ML supply chain, including third-party SDK dependencies that bundle TFLite.
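For the patch and inventory steps, the advisory's patched-version list can be encoded in a short script that flags vulnerable installs found during an audit. This is an illustrative sketch, not an official tool: the `is_vulnerable` helper and the handling of unlisted branches are our own assumptions derived from the version list above.

```python
# Sketch: flag TensorFlow/TFLite versions still vulnerable to CVE-2020-15211.
# Patched releases come from the advisory; the helper name `is_vulnerable`
# and the branch logic are illustrative, not part of any TensorFlow API.

# First patched release on each affected branch.
PATCHED = {
    (1, 15): (1, 15, 4),
    (2, 0): (2, 0, 3),
    (2, 1): (2, 1, 2),
    (2, 2): (2, 2, 1),
    (2, 3): (2, 3, 1),
}

def is_vulnerable(version: str) -> bool:
    """Return True if `version` predates the patched release on its branch."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    branch = parts[:2]
    if branch in PATCHED:
        return parts < PATCHED[branch]
    # Branches not listed above: anything before 1.15 never received a fix;
    # 2.4 and later shipped with the fix already applied.
    return branch < (1, 15)

if __name__ == "__main__":
    for v in ["1.15.3", "2.2.0", "2.3.1", "2.4.0"]:
        status = "VULNERABLE" if is_vulnerable(v) else "ok"
        print(f"tensorflow {v}: {status}")
```

Running the script over an inventory of pinned versions gives a quick first pass; it does not replace checking vendored or statically linked TFLite copies inside third-party SDKs.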

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art.15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.2.6 - Management of AI system vulnerabilities
NIST AI RMF
MANAGE 2.2 - Mechanisms to sustain treatment of identified risks
OWASP LLM Top 10
LLM03:2025 - Supply Chain

Frequently Asked Questions

What is CVE-2020-15211?

CVE-2020-15211 is a heap out-of-bounds read/write vulnerability in TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, and 2.3.1. The flatbuffer model format reserves the tensor index -1 for optional inputs, but pre-patch validation accepted -1 on any operator slot, so a crafted model can make TFLite read and write at a fixed offset outside heap-allocated tensor arrays. Any deployment loading TFLite models from untrusted sources (edge devices, model serving APIs, mobile apps) is exposed, potentially leading to memory disclosure or code execution.

Is CVE-2020-15211 actively exploited?

No active exploitation has been confirmed, and CVE-2020-15211 is not in the CISA KEV catalog. However, proof-of-concept exploit code is publicly available, which increases the risk of exploitation.

How to fix CVE-2020-15211?

1. PATCH: Upgrade TFLite to 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1 (fixes landed across six commits: 46d5b08, 00302787, e11f5558, cd31fd0c, 1970c21, fff2c83). 2. WORKAROUND (if patching is delayed): implement a custom Verifier at model load time that enforces (a) only operators explicitly supporting optional inputs may use the -1 tensor index, and (b) -1 is permitted only on tensor slots declared optional in the operator spec. 3. DETECT: audit model loading code for flatbuffer deserialization without index validation; scan artifact repositories for .tflite files with unexpected negative index values using flatbuffers tooling. 4. HARDEN: never load TFLite models from untrusted or unverified sources without schema validation; apply input validation at model ingestion boundaries. 5. INVENTORY: identify all TFLite consumers in your ML supply chain, including third-party SDK dependencies that bundle TFLite.

What systems are affected by CVE-2020-15211?

This vulnerability affects the following AI/ML architecture patterns: model serving, edge AI deployments, inference pipelines, training pipelines.

What is the CVSS score for CVE-2020-15211?

CVE-2020-15211 has a CVSS v3.1 base score of 4.8 (MEDIUM). The EPSS exploitation probability is 0.34%.

Technical Details

NVD Description

In TensorFlow Lite before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, saved models in the flatbuffer format use a double indexing scheme: a model has a set of subgraphs, each subgraph has a set of operators, and each operator has a set of input/output tensors. The flatbuffer format uses indices for the tensors, indexing into an array of tensors owned by the subgraph. This results in a pattern of double array indexing when retrieving the data of each tensor.

However, some operators can have optional tensors. To handle this scenario, the flatbuffer model uses the negative value `-1` as the index for these tensors, which results in special casing during validation at model loading time. Unfortunately, this means that `-1` is a valid tensor index for any operator, including those that don't expect optional inputs, and including output tensors. This allows writing and reading outside the bounds of heap-allocated arrays, although only at a specific offset from the start of these arrays. The result is both a read and a write gadget, albeit very limited in scope.

The issue is patched in several commits (46d5b0852, 00302787b7, e11f5558, cd31fd0ce, 1970c21, and fff2c83) and released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, and 2.3.1. A potential workaround is to add a custom `Verifier` to the model loading code to ensure that only operators which accept optional inputs use the `-1` special value, and only for the tensors they expect to be optional. Since this allow-list approach is error-prone, we advise upgrading to the patched code.
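The double-indexing hazard described above can be illustrated with a toy model of the tensor array. This is not TFLite code: the names and heap layout are invented for illustration, and the example models C-style pointer arithmetic (where index -1 reads one element before the array) rather than Python's wrap-around indexing.

```python
# Toy illustration of CVE-2020-15211 (not TFLite code): the runtime computes
# roughly `subgraph_tensor_base + index`, so the special index -1 lands one
# element *before* the subgraph's heap-allocated tensor array.

heap = ["<adjacent allocation>", "tensor0", "tensor1", "tensor2"]
TENSOR_BASE = 1  # the subgraph's tensor array starts here in our fake heap

def tensor_at(index: int) -> str:
    # C-style lookup with no bounds check, mirroring the pre-patch behaviour
    # where -1 was accepted as a valid index for every operator slot.
    return heap[TENSOR_BASE + index]

print(tensor_at(0))   # a legitimate lookup: "tensor0"
print(tensor_at(-1))  # the bug: reads the neighbouring allocation
```

The patched validator closes this gap by accepting -1 only on tensor slots that the operator's spec declares optional, which is exactly what the Verifier workaround enforces.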

Exploitation Scenario

An adversary targets an inference API that accepts user-uploaded TFLite models for on-device or server-side evaluation. They craft a malicious flatbuffer model in which a non-optional operator (e.g., a Conv2D layer) has its output tensor index set to -1. The model passes basic structural validation, but at inference time TFLite dereferences the -1 index and accesses heap memory just before the tensor array. The attacker iterates payload variants to leak heap contents (e.g., adjacent model weights or input tensor data), enabling reconnaissance of the inference pipeline. In a more advanced scenario, the OOB write primitive at the constrained offset is chained with heap grooming to corrupt function pointers or vtable entries, achieving code execution within the TFLite process.

Weaknesses (CWE)

CVSS Vector

CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N

Timeline

Published
September 25, 2020
Last Modified
November 21, 2024
First Seen
September 25, 2020

Related Vulnerabilities