CVE-2022-23557: TensorFlow TFLite: DoS via divide-by-zero in BiasAndClamp
MEDIUM | PoC Available | CISA SSVC: Track

Any TFLite inference endpoint that accepts externally supplied model files is vulnerable to a remotely triggered crash. An authenticated attacker (low privilege) can send a crafted TFLite model with a zero bias_size to halt the inference service. Patch immediately to TensorFlow 2.8.0, 2.7.1, 2.6.3, or 2.5.3, and block untrusted model uploads at the perimeter.
Risk Assessment
Medium severity but operationally impactful for production AI services. CVSS 6.5 with network-accessible, low-complexity exploitation requiring only low privileges — no user interaction needed. The impact is purely availability (no data exfiltration or code execution), but a crash loop on an inference server can halt business-critical AI pipelines. Edge/mobile TFLite deployments are less exposed if model files are static. Risk elevates significantly for any model-as-a-service or platform accepting user-submitted TFLite models.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| tensorflow | pip | < 2.5.3; 2.6.0 – 2.6.2; 2.7.0 | 2.5.3, 2.6.3, 2.7.1, 2.8.0 |

If you run any TensorFlow release older than the patched versions above, you are affected.
Recommended Action
5 steps:

1. PATCH: Upgrade to TensorFlow 2.8.0, or apply the cherrypicked fix in 2.7.1, 2.6.3, or 2.5.3. Commit 8c6f391a2282684a25cbfec7687bd5d35261a209 contains the fix.
2. VALIDATE INPUT: If upgrading is not immediately feasible, reject TFLite model uploads from untrusted sources at the API gateway level.
3. SIGN MODELS: Implement model artifact signing; only execute models with validated provenance.
4. ISOLATE INFERENCE: Run TFLite inference in sandboxed processes so a crash cannot cascade to the serving layer.
5. MONITOR: Alert on repeated inference-service restarts or abnormal process exits, which may indicate exploitation attempts.
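The ISOLATE INFERENCE step above can be sketched with Python's standard multiprocessing module. The worker below is a stand-in: instead of loading a TFLite interpreter, it simulates the SIGFPE that a zero bias_size would raise, so the example is self-contained. In production, the child process would run the interpreter on the untrusted model; the point is that a kernel crash kills only the worker, and the parent reports an error instead of dying.

```python
import multiprocessing as mp
import os
import signal

def _inference_worker(crash: bool, queue: "mp.Queue") -> None:
    # Placeholder for TFLite inference on an untrusted model. Here we
    # simulate the CVE-2022-23557 outcome: a division by zero in the
    # BiasAndClamp kernel delivers SIGFPE and kills the process.
    if crash:
        os.kill(os.getpid(), signal.SIGFPE)
    queue.put("prediction-placeholder")

def run_isolated(crash: bool, timeout: float = 5.0) -> dict:
    """Run inference in a child process so a kernel crash cannot
    take down the serving layer."""
    queue: "mp.Queue" = mp.Queue()
    proc = mp.Process(target=_inference_worker, args=(crash, queue))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()
        proc.join()
        return {"status": "timeout"}
    if proc.exitcode != 0:
        # A negative exitcode means the child was killed by a signal.
        return {"status": "crashed", "exitcode": proc.exitcode}
    return {"status": "ok", "result": queue.get()}

if __name__ == "__main__":
    print(run_isolated(crash=False))  # healthy model: status "ok"
    print(run_isolated(crash=True))   # crafted model: worker dies, server survives
```

The same pattern generalizes to a pool of disposable workers; combine it with the MONITOR step by alerting when the crash rate per client exceeds a threshold.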
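For the VALIDATE INPUT step, a coarse first-line filter at the gateway can at least confirm an upload looks like a TFLite flatbuffer and stays within size bounds (TFLite files carry the file identifier "TFL3" at byte offset 4). This sketch is illustrative only: it does not inspect operators or bias tensors, so it cannot detect the zero-bias_size condition itself; full validation means parsing the model against the flatbuffer schema, ideally inside a sandbox. The size limit is an arbitrary example value.

```python
MAX_MODEL_BYTES = 50 * 1024 * 1024  # example upload limit; tune per deployment

def tflite_upload_precheck(data: bytes, max_size: int = MAX_MODEL_BYTES) -> bool:
    """Coarse gateway filter for user-submitted TFLite models.

    Checks only size bounds and the flatbuffer file identifier; a
    malicious zero bias_size requires schema-level parsing to detect.
    """
    if not (8 <= len(data) <= max_size):
        return False
    # TFLite flatbuffers embed the identifier "TFL3" at byte offset 4.
    return data[4:8] == b"TFL3"
```

A gateway would call this before persisting the upload, rejecting anything that fails with a 4xx response rather than handing it to an inference worker.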
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2022-23557?
CVE-2022-23557 is a denial-of-service vulnerability in TensorFlow Lite (TFLite). A crafted TFLite model with a zero bias_size triggers a division by zero in the BiasAndClamp implementation, crashing the inference process. Any TFLite inference endpoint that accepts externally supplied model files is exposed; the fix ships in TensorFlow 2.8.0, with backports to 2.7.1, 2.6.3, and 2.5.3.
Is CVE-2022-23557 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2022-23557, increasing the risk of exploitation.
How to fix CVE-2022-23557?
1. PATCH: Upgrade to TensorFlow 2.8.0, or apply the cherrypicked fix in 2.7.1, 2.6.3, or 2.5.3 (commit 8c6f391a2282684a25cbfec7687bd5d35261a209).
2. VALIDATE INPUT: If upgrading is not immediately feasible, reject TFLite model uploads from untrusted sources at the API gateway level.
3. SIGN MODELS: Implement model artifact signing; only execute models with validated provenance.
4. ISOLATE INFERENCE: Run TFLite inference in sandboxed processes so a crash cannot cascade to the serving layer.
5. MONITOR: Alert on repeated inference-service restarts or abnormal process exits, which may indicate exploitation attempts.
What systems are affected by CVE-2022-23557?
This vulnerability affects the following AI/ML architecture patterns: model serving, inference pipelines, edge AI deployments, MLOps platforms.
What is the CVSS score for CVE-2022-23557?
CVE-2022-23557 has a CVSS v3.1 base score of 6.5 (MEDIUM). The EPSS exploitation probability is 0.22%.
Technical Details
NVD Description
Tensorflow is an Open Source Machine Learning Framework. An attacker can craft a TFLite model that would trigger a division by zero in `BiasAndClamp` implementation. There is no check that the `bias_size` is non zero. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.
Exploitation Scenario
Attacker with a low-privilege account on a TFLite model-serving platform (e.g., an MLOps API, a federated learning hub, or a cloud inference endpoint) crafts a TFLite flatbuffer where a Conv2D or similar layer references a BiasAndClamp operation with bias_size set to zero. The attacker submits this model via the inference or upload API. When the server attempts to run inference, BiasAndClamp performs a division by zero, triggering an unhandled exception or SIGFPE that crashes the inference worker. With no rate limiting, the attacker can sustain a denial-of-service loop by repeatedly re-submitting the crafted model, effectively making the inference service unavailable.
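To make the failure mode concrete, here is a simplified Python analogue of a bias-and-clamp kernel (not the actual C++ implementation in common.h): the bias vector is tiled across the activation array, so indexing uses `i % bias_size`, and a zero-length bias turns that into a division by zero. The explicit guard shown mirrors the kind of non-zero check the patch introduces; the exact fix in the TensorFlow commit may differ.

```python
def bias_and_clamp(array, bias, clamp_min, clamp_max):
    """Illustrative analogue of a bias-and-clamp kernel.

    The bias vector is broadcast cyclically over the activation array,
    so each element is indexed with i % len(bias). With len(bias) == 0
    the modulo raises ZeroDivisionError; in the C++ kernel the
    equivalent operation on a zero bias_size crashes the process.
    """
    if len(bias) == 0:
        # The guard the vulnerable code lacked: reject zero bias_size.
        raise ValueError("bias_size must be non-zero")
    return [min(max(x + bias[i % len(bias)], clamp_min), clamp_max)
            for i, x in enumerate(array)]

# Normal model: bias of length 2 tiled over 4 activations.
print(bias_and_clamp([1, 2, 3, 4], [10, 20], 0, 100))  # [11, 22, 13, 24]
```

A crafted model sets the bias tensor's size to zero, so without the guard the kernel divides by zero on the first element and the worker process dies.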
Weaknesses (CWE)
CWE-369: Divide By Zero
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

References
- github.com/tensorflow/tensorflow/blob/5100e359aef5c8021f2e71c7b986420b85ce7b3d/tensorflow/lite/kernels/internal/common.h (Exploit, Third Party)
- github.com/tensorflow/tensorflow/commit/8c6f391a2282684a25cbfec7687bd5d35261a209 (Patch, Third Party)
- github.com/tensorflow/tensorflow/security/advisories/GHSA-gf2j-f278-xh4v (Patch, Third Party)
Related Vulnerabilities
- CVE-2020-15196 (9.9) TensorFlow: heap OOB read in sparse/ragged count ops (same package: tensorflow)
- CVE-2020-15205 (9.8) TensorFlow: heap overflow in StringNGrams, ASLR bypass (same package: tensorflow)
- CVE-2020-15208 (9.8) TFLite: OOB read/write via tensor dimension mismatch (same package: tensorflow)
- CVE-2019-16778 (9.8) TensorFlow: heap overflow in UnsortedSegmentSum op (same package: tensorflow)
- CVE-2022-23587 (9.8) TensorFlow: integer overflow in Grappler enables RCE (same package: tensorflow)