CVE-2022-23561: TensorFlow Lite: OOB write, arbitrary write primitive

HIGH
Published February 4, 2022
CISO Take

Any pipeline that loads TFLite model files from untrusted or user-controlled sources is vulnerable to arbitrary memory writes, with a realistic path to code execution inside the inference process. Patch immediately to TensorFlow 2.8.0 (or backports 2.7.1/2.6.3/2.5.3) and restrict model loading to cryptographically verified artifacts. If you operate edge inference, federated learning nodes, or model-upload APIs built on TFLite, treat this as a critical remediation item.

Risk Assessment

CVSS 8.8 with network vector, low complexity, and low privilege requirement makes this highly actionable for any attacker with access to a model ingestion endpoint. The ability to corrupt the memory allocator's linked list elevates a heap OOB write to an arbitrary write primitive — a well-understood building block for full RCE on predictable heap layouts common in containerized ML serving. Not in CISA KEV, suggesting limited confirmed active exploitation, but the exploit primitive is elementary for a motivated attacker with binary exploitation skills. TFLite's broad deployment across mobile, edge, and server inference expands the attack surface considerably.

Affected Systems

Package: tensorflow
Ecosystem: pip
Vulnerable Range: < 2.5.3; >= 2.6.0, < 2.6.3; >= 2.7.0, < 2.7.1
Patched: 2.5.3, 2.6.3, 2.7.1, 2.8.0


Severity & Risk

CVSS 3.1: 8.8 / 10
EPSS: 0.2% chance of exploitation in 30 days (higher than 39% of all CVEs)
Exploitation Status: No known exploitation
Sophistication: Moderate

Attack Surface

Attack Vector (AV): Network
Attack Complexity (AC): Low
Privileges Required (PR): Low
User Interaction (UI): None
Scope (S): Unchanged
Confidentiality (C): High
Integrity (I): High
Availability (A): High

Recommended Action

  1. Patch: upgrade to TensorFlow 2.8.0, or apply cherry-picks to 2.7.1, 2.6.3, or 2.5.3 as applicable.

  2. Model provenance: only load TFLite models from cryptographically signed, hash-verified sources — reject unsigned or externally sourced model files.

  3. Sandboxing: isolate TFLite model loading and inference in separate processes or containers with restricted syscall profiles (seccomp, gVisor) to contain blast radius.

  4. Input validation: enforce FlatBuffers schema validation and file size limits before model parsing.

  5. Network controls: enforce strict allowlisting on which services can submit model files to inference endpoints.

  6. Detection: monitor inference processes for heap corruption signals (SIGSEGV, heap sanitizer output, unexpected process crashes) and alert on anomalous model file submissions.
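As a sketch of the provenance and input-validation steps above, a loader can enforce a size cap and verify a SHA-256 digest against an allowlist before any model parsing occurs. The `APPROVED_DIGESTS` set, the size limit, and the commented-out TFLite handoff are illustrative placeholders, not values from this advisory:

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of SHA-256 digests for approved model artifacts.
# In practice this would come from a signed manifest or an artifact registry.
APPROVED_DIGESTS = {
    "a" * 64,  # placeholder digest for an approved model
}

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 so large models never fully load into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def load_if_verified(path: str, max_bytes: int = 512 * 1024 * 1024) -> str:
    """Reject oversized or unapproved model files before any parsing occurs."""
    size = Path(path).stat().st_size
    if size > max_bytes:
        raise ValueError(f"model exceeds size limit: {size} bytes")
    digest = sha256_of(path)
    if digest not in APPROVED_DIGESTS:
        raise PermissionError(f"unapproved model digest: {digest}")
    # Only now hand the file to the TFLite runtime (ideally in a sandboxed
    # process), e.g.:
    #   import tflite_runtime.interpreter as tflite
    #   return tflite.Interpreter(model_path=path)
    return path
```

The key design point is ordering: the digest and size checks run before any FlatBuffer bytes reach the parser, so a malicious model is rejected without ever exercising the vulnerable code path.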

CISA SSVC Assessment

Decision: Track
Exploitation: none
Automatable: no
Technical Impact: total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act: Art. 15 (Accuracy, robustness and cybersecurity); Art. 9 (Risk management system)
ISO 42001: A.10.1 (AI supply chain management); A.6.2 (AI system design and development security)
NIST AI RMF: MANAGE 2.2 (Mechanisms for AI risk treatment and response); MAP 5.1 (AI supply chain and dependency risk assessment)

Frequently Asked Questions

What is CVE-2022-23561?

CVE-2022-23561 is a high-severity out-of-bounds write in TensorFlow Lite (TFLite). A crafted TFLite model file can trigger a write outside the bounds of an array and overwrite the memory allocator's linked list, which under certain conditions yields an arbitrary write primitive and a realistic path to code execution inside the inference process. The fix shipped in TensorFlow 2.8.0 and was backported to 2.7.1, 2.6.3, and 2.5.3.

Is CVE-2022-23561 actively exploited?

No confirmed active exploitation of CVE-2022-23561 has been reported, but organizations should still patch proactively.

How to fix CVE-2022-23561?

1. Patch: upgrade to TensorFlow 2.8.0, or apply cherry-picks to 2.7.1, 2.6.3, or 2.5.3 as applicable.
2. Model provenance: only load TFLite models from cryptographically signed, hash-verified sources; reject unsigned or externally sourced model files.
3. Sandboxing: isolate TFLite model loading and inference in separate processes or containers with restricted syscall profiles (seccomp, gVisor) to contain blast radius.
4. Input validation: enforce FlatBuffers schema validation and file size limits before model parsing.
5. Network controls: enforce strict allowlisting on which services can submit model files to inference endpoints.
6. Detection: monitor inference processes for heap corruption signals (SIGSEGV, heap sanitizer output, unexpected process crashes) and alert on anomalous model file submissions.

What systems are affected by CVE-2022-23561?

This vulnerability affects the following AI/ML architecture patterns: model serving, edge AI / on-device inference, federated learning nodes, training pipelines, model registries and marketplaces.

What is the CVSS score for CVE-2022-23561?

CVE-2022-23561 has a CVSS v3.1 base score of 8.8 (HIGH). The EPSS exploitation probability is 0.18%.

Technical Details

NVD Description

Tensorflow is an Open Source Machine Learning Framework. An attacker can craft a TFLite model that would cause a write outside of bounds of an array in TFLite. In fact, the attacker can override the linked list used by the memory allocator. This can be leveraged for an arbitrary write primitive under certain conditions. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.

Exploitation Scenario

An adversary with low-privilege access to an ML inference API — for example, a model-upload endpoint in a federated learning platform or an internal model testing service — crafts a malformed TFLite flatbuffer file with a specially structured memory allocator region. When the TFLite runtime parses the file, it performs an out-of-bounds write that overwrites the allocator's linked list metadata. On a containerized inference server with a predictable heap layout, the adversary uses this arbitrary write primitive to overwrite a function pointer, redirecting execution to shellcode or a ROP chain. From the compromised inference container the adversary can exfiltrate proprietary model weights, poison inference responses, or pivot laterally within the ML platform's internal network.
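The allocator-metadata corruption described above can be illustrated with a toy free-list in Python. This is a didactic simulation only: it does not reproduce TFLite's actual allocator, and the chunk addresses are invented. The point is the mechanism, namely that one corrupted "next free" pointer lets a later allocation land at an attacker-chosen location:

```python
# Toy free-list allocator illustrating (not reproducing) the bug class:
# an out-of-bounds write that lands on allocator metadata lets the attacker
# choose where a subsequent allocation points.

class Chunk:
    def __init__(self, addr, next_free=None):
        self.addr = addr            # pretend address of this chunk's data
        self.next_free = next_free  # singly linked free list

heap = [0] * 64  # flat array standing in for the heap

# Free list: chunk A -> chunk B
chunk_b = Chunk(addr=32)
chunk_a = Chunk(addr=16, next_free=chunk_b)
free_head = chunk_a

def malloc():
    """Pop the head of the free list and return its address."""
    global free_head
    c = free_head
    free_head = c.next_free
    return c.addr

# 1. The crafted model's OOB write corrupts chunk A's metadata,
#    pointing next_free at an attacker-chosen "address".
chunk_a.next_free = Chunk(addr=48)  # attacker-controlled target

# 2. The first allocation pops chunk A normally...
a1 = malloc()    # returns 16
# 3. ...but the second returns the attacker's target instead of chunk B's 32:
a2 = malloc()    # returns 48
heap[a2] = 0x41  # arbitrary write: data lands where the attacker chose
```

In the real vulnerability the overwritten pointer lives inside TFLite's arena allocator, and turning the resulting write primitive into code execution additionally requires a predictable heap layout, as the scenario above notes.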

Weaknesses (CWE)

CWE-787: Out-of-bounds Write

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H

Timeline

Published
February 4, 2022
Last Modified
November 21, 2024
First Seen
February 4, 2022

Related Vulnerabilities