CVE-2023-27579: TensorFlow Lite: FPE in tflite model crashes inference runtime
Severity: HIGH. A crafted .tflite model with filter_input_channel < 1 triggers a floating-point exception that crashes any TensorFlow Lite inference process loading it — pure availability impact. If your ML serving or edge inference pipeline accepts externally-supplied or user-uploaded model files, this is directly exploitable with trivial effort. Patch to TF 2.12 or 2.11.1 immediately and add input validation gating model parameter ranges before load.
Risk Assessment
High severity but constrained blast radius. CVSS 7.5 reflects zero prerequisites (no auth, no user interaction, network-reachable) but impact is limited to availability. Real-world risk is elevated in model serving APIs or MLOps pipelines that ingest third-party or user-supplied .tflite files — an attacker can reliably force a crash with a single malformed file. Environments with internal-only model sources and no external model ingestion are at low residual risk post-patch.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| tensorflow | pip | < 2.11.1 | 2.11.1, 2.12.0 |
Running tensorflow at a version below 2.11.1? You're affected.
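The version check above can be automated in CI or at deploy time. A minimal sketch, assuming plain MAJOR.MINOR[.PATCH] version strings (the helper name `is_vulnerable` is hypothetical; real deployments should use `packaging.version` to handle pre-release tags like `2.12.0rc0`):

```python
def is_vulnerable(version: str) -> bool:
    """True if this TensorFlow build predates the CVE-2023-27579 fix.

    Assumes a plain MAJOR.MINOR[.PATCH] version string, no rc/dev suffixes.
    """
    parts = tuple(int(p) for p in version.split(".")[:3])
    parts = (parts + (0, 0, 0))[:3]  # pad "2.12" -> (2, 12, 0)
    if parts >= (2, 12, 0):
        return False  # 2.12.0 and later ship the fix
    if parts[:2] == (2, 11) and parts >= (2, 11, 1):
        return False  # 2.11.1 carries the cherry-picked fix commit
    return True
```

Wire this into a startup assertion against `importlib.metadata.version("tensorflow")` so vulnerable builds fail fast rather than serve traffic.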
Recommended Action
Five steps:
1. PATCH: Upgrade TensorFlow to >= 2.12 or apply the cherry-pick to 2.11.1. Commit: 34f8368c535253f5c9cb3a303297743b62442aaa.
2. VALIDATE INPUT: Add pre-load checks asserting filter_input_channel >= 1 for all .tflite models before passing them to the runtime.
3. ISOLATE: Run TFLite inference in sandboxed worker processes so an FPE crash does not take down the parent service.
4. RESTRICT: Enforce allowlist-only model sources; no user-uploaded or unverified model files in production inference paths.
5. DETECT: Alert on abnormal inference process restarts; a spike in FPE-related crashes (SIGFPE) is an indicator of exploit attempts.
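The ISOLATE and DETECT steps can be combined: run each inference in a child process and inspect its exit code, since a process killed by a signal exits with code -N. A minimal sketch (the worker shown is a placeholder; a real worker would load the model with `tf.lite.Interpreter` and run inference):

```python
import multiprocessing as mp
import signal

def _placeholder_worker(model_path, conn):
    # Stand-in for the real worker, which would construct
    # tf.lite.Interpreter(model_path=model_path) and invoke it.
    conn.send("ok")

def run_isolated(model_path, worker=_placeholder_worker, timeout=30):
    """Run inference in a child process so a SIGFPE kills only the child."""
    parent_conn, child_conn = mp.Pipe()
    proc = mp.Process(target=worker, args=(model_path, child_conn))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()
        proc.join()
        return {"status": "timeout"}
    # A child killed by signal N exits with code -N; -SIGFPE is the
    # crash signature matching CVE-2023-27579.
    if proc.exitcode is not None and proc.exitcode < 0 \
            and -proc.exitcode == signal.SIGFPE:
        return {"status": "crashed", "signal": "SIGFPE"}
    if proc.exitcode == 0 and parent_conn.poll():
        return {"status": "ok", "result": parent_conn.recv()}
    return {"status": "error", "exitcode": proc.exitcode}
```

A `"crashed"` result is both a safe failure (the serving process survives) and an alertable event: feed it to your metrics pipeline and page on a spike.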
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2023-27579?
A crafted .tflite model with filter_input_channel < 1 triggers a floating-point exception that crashes any TensorFlow Lite inference process loading it — pure availability impact. If your ML serving or edge inference pipeline accepts externally-supplied or user-uploaded model files, this is directly exploitable with trivial effort. Patch to TF 2.12 or 2.11.1 immediately and add input validation gating model parameter ranges before load.
Is CVE-2023-27579 actively exploited?
No confirmed active exploitation of CVE-2023-27579 has been reported, but organizations should still patch proactively.
How to fix CVE-2023-27579?
1. PATCH: Upgrade TensorFlow to >= 2.12 or apply the cherry-pick to 2.11.1. Commit: 34f8368c535253f5c9cb3a303297743b62442aaa. 2. VALIDATE INPUT: Add pre-load checks asserting filter_input_channel >= 1 for all tflite models before passing to the runtime. 3. ISOLATE: Run TFLite inference in sandboxed worker processes so a FPE crash does not take down the parent service. 4. RESTRICT: Enforce allowlist-only model sources — no user-uploaded or unverified model files in production inference paths. 5. DETECT: Alert on abnormal inference process restarts; a spike in FPE-related crashes (SIGFPE) is an indicator of exploit attempts.
What systems are affected by CVE-2023-27579?
This vulnerability affects the following AI/ML architecture patterns: model serving, edge inference, training pipelines, MLOps CI/CD pipelines.
What is the CVSS score for CVE-2023-27579?
CVE-2023-27579 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.21%.
Technical Details
NVD Description
TensorFlow is an end-to-end open source platform for machine learning. Constructing a tflite model with a parameter `filter_input_channel` of less than 1 gives an FPE. This issue has been patched in version 2.12. TensorFlow will also cherry-pick the fix commit onto TensorFlow 2.11.1.
Exploitation Scenario
An adversary targets an ML-as-a-service endpoint that accepts custom .tflite model uploads (e.g., a mobile app backend or AutoML platform). They craft a minimal .tflite model with a convolution layer where filter_input_channel is set to 0. Uploading and triggering inference against this model causes an immediate FPE, crashing the inference worker. With no rate limiting, the adversary can loop this to keep the service in a crash-restart cycle, achieving sustained denial of service. No ML expertise required — the only knowledge needed is the tflite flatbuffer schema, which is publicly documented.
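The RESTRICT mitigation closes off this scenario entirely: if uploaded models must match a known-good digest before they ever reach the interpreter, a crafted flatbuffer never loads. A minimal sketch, assuming a hash allowlist (`APPROVED_MODELS` is a hypothetical stand-in; production systems would source digests from a signed manifest or model registry):

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of SHA-256 digests for vetted model artifacts.
# In production, populate this from a signed manifest, not a literal.
APPROVED_MODELS = {
    # SHA-256 of the empty file, used here purely as an example digest.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def gate_model(path: str) -> bool:
    """Reject any .tflite file whose digest is not on the allowlist."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest in APPROVED_MODELS
```

Call `gate_model()` on every upload before it is handed to the TFLite runtime; combined with rate limiting on the upload endpoint, this breaks the crash-restart loop described above.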
Weaknesses (CWE)
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
Related Vulnerabilities
| CVE | CVSS | Summary |
|---|---|---|
| CVE-2020-15196 | 9.9 | TensorFlow: heap OOB read in sparse/ragged count ops |
| CVE-2020-15205 | 9.8 | TensorFlow: heap overflow in StringNGrams, ASLR bypass |
| CVE-2020-15208 | 9.8 | TFLite: OOB read/write via tensor dimension mismatch |
| CVE-2019-16778 | 9.8 | TensorFlow: heap overflow in UnsortedSegmentSum op |
| CVE-2022-23587 | 9.8 | TensorFlow: integer overflow in Grappler enables RCE |

All of the above are in the same package: tensorflow.