CVE-2023-27579: TensorFlow Lite: FPE in tflite model crashes inference runtime

HIGH
Published March 25, 2023
CISO Take

A crafted .tflite model with filter_input_channel < 1 triggers a floating-point exception that crashes any TensorFlow Lite inference process loading it — pure availability impact. If your ML serving or edge inference pipeline accepts externally-supplied or user-uploaded model files, this is directly exploitable with trivial effort. Patch to TF 2.12 or 2.11.1 immediately and add input validation gating model parameter ranges before load.

Risk Assessment

High severity but constrained blast radius. CVSS 7.5 reflects zero prerequisites (no auth, no user interaction, network-reachable) but impact is limited to availability. Real-world risk is elevated in model serving APIs or MLOps pipelines that ingest third-party or user-supplied .tflite files — an attacker can reliably force a crash with a single malformed file. Environments with internal-only model sources and no external model ingestion are at low residual risk post-patch.

Affected Systems

Package Ecosystem Vulnerable Range Patched
tensorflow pip < 2.11.1 2.11.1, 2.12.0

Do you use tensorflow at a version below 2.11.1? You're affected.

Severity & Risk

CVSS 3.1
7.5 / 10
EPSS
0.2%
chance of exploitation in 30 days
Higher than 43% of all CVEs
Exploitation Status
No known exploitation
Sophistication
Trivial

Attack Surface

Attack Vector (AV): Network
Attack Complexity (AC): Low
Privileges Required (PR): None
User Interaction (UI): None
Scope (S): Unchanged
Confidentiality (C): None
Integrity (I): None
Availability (A): High

Recommended Action

5 steps
  1. PATCH

    Upgrade TensorFlow to >= 2.12 or apply the cherry-pick to 2.11.1. Commit: 34f8368c535253f5c9cb3a303297743b62442aaa.

  2. VALIDATE INPUT

    Add pre-load checks asserting filter_input_channel >= 1 for all tflite models before passing to the runtime.

  3. ISOLATE

    Run TFLite inference in sandboxed worker processes so an FPE crash does not take down the parent service.

  4. RESTRICT

    Enforce allowlist-only model sources — no user-uploaded or unverified model files in production inference paths.

  5. DETECT

    Alert on abnormal inference process restarts; a spike in FPE-related crashes (SIGFPE) is an indicator of exploit attempts.
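Steps 2 and 3 above can be combined into a single probe-load gate: attempt to load each incoming model inside a disposable child process, and reject the file if the child dies with a signal. The sketch below is a minimal illustration, not production code; `safe_load` and `tflite_loader` are hypothetical names, and the commented-out loader body assumes TensorFlow is installed.

```python
import multiprocessing as mp

def safe_load(path, loader, timeout_s=30.0):
    """Probe-load a model in a child process (steps 2-3: VALIDATE + ISOLATE).

    A SIGFPE in the child (e.g. from a crafted model with
    filter_input_channel < 1) kills only the disposable worker,
    never the parent service. Returns (ok, detail).
    """
    proc = mp.Process(target=loader, args=(path,))
    proc.start()
    proc.join(timeout_s)
    if proc.is_alive():            # hung loader: kill it and reject the file
        proc.terminate()
        proc.join()
        return False, "timeout"
    if proc.exitcode != 0:         # a negative exitcode means killed by signal
        return False, f"crashed (exitcode={proc.exitcode})"
    return True, "ok"

def tflite_loader(path):
    # Hypothetical production loader -- assumes TensorFlow is available:
    #   import tensorflow as tf
    #   interp = tf.lite.Interpreter(model_path=path)
    #   interp.allocate_tensors()
    pass
```

Only admit a model into the serving path when `safe_load` returns ok; the crash detail (exit code) can also feed the step 5 alerting.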

CISA SSVC Assessment

Decision Track
Exploitation none
Automatable Yes
Technical Impact partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art.15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.2.5 - AI system robustness and resilience
NIST AI RMF
MANAGE-2.2 - Mechanisms to sustain oversight of deployed AI
OWASP LLM Top 10
LLM04 - Model Denial of Service

Frequently Asked Questions

What is CVE-2023-27579?

A crafted .tflite model with filter_input_channel < 1 triggers a floating-point exception that crashes any TensorFlow Lite inference process loading it — pure availability impact. If your ML serving or edge inference pipeline accepts externally-supplied or user-uploaded model files, this is directly exploitable with trivial effort. Patch to TF 2.12 or 2.11.1 immediately and add input validation gating model parameter ranges before load.

Is CVE-2023-27579 actively exploited?

No confirmed active exploitation of CVE-2023-27579 has been reported, but organizations should still patch proactively.

How to fix CVE-2023-27579?

1. PATCH: Upgrade TensorFlow to >= 2.12 or apply the cherry-pick to 2.11.1. Commit: 34f8368c535253f5c9cb3a303297743b62442aaa.
2. VALIDATE INPUT: Add pre-load checks asserting filter_input_channel >= 1 for all tflite models before passing them to the runtime.
3. ISOLATE: Run TFLite inference in sandboxed worker processes so an FPE crash does not take down the parent service.
4. RESTRICT: Enforce allowlist-only model sources — no user-uploaded or unverified model files in production inference paths.
5. DETECT: Alert on abnormal inference process restarts; a spike in FPE-related crashes (SIGFPE) is an indicator of exploit attempts.

What systems are affected by CVE-2023-27579?

This vulnerability affects the following AI/ML architecture patterns: model serving, edge inference, training pipelines, MLOps CI/CD pipelines.

What is the CVSS score for CVE-2023-27579?

CVE-2023-27579 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.21%.

Technical Details

NVD Description

TensorFlow is an end-to-end open source platform for machine learning. Constructing a tflite model with a parameter `filter_input_channel` of less than 1 gives an FPE. This issue has been patched in version 2.12. TensorFlow will also cherry-pick the fix commit onto TensorFlow 2.11.1.

Exploitation Scenario

An adversary targets an ML-as-a-service endpoint that accepts custom .tflite model uploads (e.g., a mobile app backend or AutoML platform). They craft a minimal .tflite model with a convolution layer where filter_input_channel is set to 0. Uploading and triggering inference against this model causes an immediate FPE, crashing the inference worker. With no rate limiting, the adversary can loop this to keep the service in a crash-restart cycle, achieving sustained denial of service. No ML expertise required — the only knowledge needed is the tflite flatbuffer schema, which is publicly documented.
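The crash-restart loop described above is exactly the pattern step 5 is meant to catch. Below is a minimal sketch of a sliding-window crash-rate monitor; the class name and thresholds are illustrative, not part of any standard tooling.

```python
import time
from collections import deque

class CrashRateMonitor:
    """Flag an alert when worker crashes (e.g. SIGFPE exits) exceed
    a threshold within a sliding time window -- step 5: DETECT."""

    def __init__(self, max_crashes=5, window_s=60.0):
        self.max_crashes = max_crashes
        self.window_s = window_s
        self._events = deque()

    def record_crash(self, now=None):
        """Record one crash; return True if the rate warrants an alert."""
        now = time.monotonic() if now is None else now
        self._events.append(now)
        # Drop events that have aged out of the window.
        while self._events and now - self._events[0] > self.window_s:
            self._events.popleft()
        return len(self._events) > self.max_crashes
```

A supervisor would call `record_crash()` each time an inference worker exits with a signal; a sustained crash-restart cycle trips the threshold within one window, while ordinary sporadic restarts never do.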

Weaknesses (CWE)

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
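As a sanity check, the 7.5 base score follows directly from this vector under the CVSS v3.1 equations. The metric weights below are the published v3.1 values for Network/Low/None/None and C:N/I:N/A:H with scope unchanged.

```python
# Worked CVSS v3.1 base-score check for
# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.85  # Network / Low / None / None
C, I, A = 0.0, 0.0, 0.56                 # None / None / High

def roundup(x: float) -> float:
    """CVSS v3.1 Roundup: smallest one-decimal value >= x."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

iss = 1 - (1 - C) * (1 - I) * (1 - A)        # 0.56: only availability is hit
impact = 6.42 * iss                          # scope unchanged branch
exploitability = 8.22 * AV * AC * PR * UI
base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 7.5
```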

Timeline

Published
March 25, 2023
Last Modified
November 21, 2024
First Seen
March 25, 2023

Related Vulnerabilities