CVE-2022-36016: TensorFlow: CHECK-fail assertion crashes model serving

HIGH
Published September 16, 2022
CISO Take

A network-reachable assertion failure in TensorFlow's type inference allows unauthenticated attackers to crash any exposed TensorFlow serving process with a single malformed request. If you run TF Serving or any TensorFlow-based inference endpoint (including cloud-hosted), patch to TF 2.10.0 / 2.9.1 / 2.8.1 / 2.7.2 immediately. No workaround exists — patching is the only remediation.

Risk Assessment

HIGH severity in practice for organizations exposing TensorFlow inference APIs. CVSS 7.5 with AV:N/AC:L/PR:N/UI:N means any internet-facing TF endpoint is reachable with no authentication and trivial effort. The availability-only impact (A:H) limits blast radius — no data exfiltration or code execution — but a persistent crash loop against a production ML serving cluster causes full service outage. Risk is amplified in AI-native organizations where TF Serving is a critical dependency in real-time inference pipelines.

Affected Systems

Package     Ecosystem   Vulnerable Range                    Patched
tensorflow  pip         < 2.7.2; 2.8.0; 2.9.0               2.7.2, 2.8.1, 2.9.1, 2.10.0


Severity & Risk

CVSS 3.1
7.5 / 10
EPSS
0.2%
chance of exploitation in 30 days
Higher than 41% of all CVEs
Exploitation Status
No known exploitation
Sophistication
Trivial

Attack Surface

AV (Attack Vector): Network
AC (Attack Complexity): Low
PR (Privileges Required): None
UI (User Interaction): None
S (Scope): Unchanged
C (Confidentiality): None
I (Integrity): None
A (Availability): High

Recommended Action

5 steps
  1. PATCH

    Upgrade to TensorFlow 2.10.0, or to the patched point releases 2.9.1, 2.8.1, or 2.7.2, into which fix commit 6104f0d4091c260ce9352f9155f7e9b725eab012 was cherry-picked.

  2. ISOLATE

    Place TF Serving instances behind API gateways with authentication — eliminate unauthenticated public access to inference endpoints.

  3. RATE LIMIT

    Deploy request rate limiting and input validation at the API gateway layer to reduce crash-loop risk while patching is in progress.

  4. DETECT

    Monitor TF Serving process crash rates and unexpected restarts; alert on repeated process exits within short windows.

  5. INVENTORY

    Audit all internal and external TF model serving endpoints to identify exposure surface before patching.
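The inventory and patch steps above can be combined into a quick version audit. A minimal sketch; the `is_patched` helper is ours, and the version thresholds come from the patched releases named in this advisory (pre-2.7 releases are treated as unpatched, since no fix was backported to them):

```python
# Check whether an installed TensorFlow version carries the
# CVE-2022-36016 fix. Patched releases per this advisory:
# 2.7.2, 2.8.1, 2.9.1, and 2.10.0 and later.

def is_patched(version: str) -> bool:
    """Return True if `version` contains the CVE-2022-36016 fix."""
    major, minor, patch = (int(x) for x in version.split(".")[:3])
    if (major, minor) < (2, 7):
        return False      # out of support; fix was never backported
    if (major, minor) == (2, 7):
        return patch >= 2  # fix cherry-picked into 2.7.2
    if (major, minor) == (2, 8):
        return patch >= 1  # fix cherry-picked into 2.8.1
    if (major, minor) == (2, 9):
        return patch >= 1  # fix cherry-picked into 2.9.1
    return True            # 2.10.0 and later ship the fix

for v in ("2.9.0", "2.9.1", "2.10.0", "2.7.1"):
    print(v, "patched" if is_patched(v) else "VULNERABLE")
```

In practice you would feed this the output of `pip show tensorflow` (or `tf.__version__`) from each serving host found during the inventory step.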

CISA SSVC Assessment

Decision: Track
Exploitation: none
Automatable: No
Technical Impact: partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.


Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.2.5 - AI system availability and resilience
NIST AI RMF
MANAGE 2.4 - Residual risks from AI systems are managed
OWASP LLM Top 10
LLM04 - Model Denial of Service

Frequently Asked Questions

What is CVE-2022-36016?

A network-reachable assertion failure in TensorFlow's type inference allows unauthenticated attackers to crash any exposed TensorFlow serving process with a single malformed request. If you run TF Serving or any TensorFlow-based inference endpoint (including cloud-hosted), patch to TF 2.10.0 / 2.9.1 / 2.8.1 / 2.7.2 immediately. No workaround exists — patching is the only remediation.

Is CVE-2022-36016 actively exploited?

No confirmed active exploitation of CVE-2022-36016 has been reported, but organizations should still patch proactively.

How to fix CVE-2022-36016?

1. PATCH: Upgrade to TensorFlow 2.10.0, or apply the cherry-picked patches in 2.9.1, 2.8.1, or 2.7.2. Commit 6104f0d4091c260ce9352f9155f7e9b725eab012 is the fix.
2. ISOLATE: Place TF Serving instances behind API gateways with authentication — eliminate unauthenticated public access to inference endpoints.
3. RATE LIMIT: Deploy request rate limiting and input validation at the API gateway layer to reduce crash-loop risk while patching is in progress.
4. DETECT: Monitor TF Serving process crash rates and unexpected restarts; alert on repeated process exits within short windows.
5. INVENTORY: Audit all internal and external TF model serving endpoints to identify exposure surface before patching.

What systems are affected by CVE-2022-36016?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, inference infrastructure, MLOps pipelines.

What is the CVSS score for CVE-2022-36016?

CVE-2022-36016 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.19%.

Technical Details

NVD Description

TensorFlow is an open source platform for machine learning. When `tensorflow::full_type::SubstituteFromAttrs` receives a `FullTypeDef& t` that is not exactly three args, it triggers a `CHECK`-fail instead of returning a status. We have patched the issue in GitHub commit 6104f0d4091c260ce9352f9155f7e9b725eab012. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.
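The bug class described above (a hard CHECK-fail on request-derived data, rather than a returned error status) can be illustrated in miniature. This is a simplified Python analogue of the pattern, not TensorFlow's actual C++ `SubstituteFromAttrs` code:

```python
# Simplified analogue of the CVE-2022-36016 bug class: an assertion on
# attacker-influenced input aborts the whole process, whereas returning
# an error status keeps the server alive.

def substitute_crashy(type_args: list) -> list:
    # CHECK-fail style: the assertion aborts on malformed input.
    assert len(type_args) == 3, "expected exactly three type args"
    return type_args

def substitute_safe(type_args: list):
    # Patched style: validate and return (status, result) instead of dying.
    if len(type_args) != 3:
        return "InvalidArgument: expected exactly three type args", None
    return "OK", type_args

status, _ = substitute_safe(["a", "b"])  # malformed request
print(status)  # server survives and reports an error to the caller
```

The actual fix follows the second shape: the C++ function was changed to return a `Status` on malformed `FullTypeDef` inputs instead of tripping a `CHECK`.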

Exploitation Scenario

An adversary identifies a production ML inference API powered by TensorFlow Serving (e.g., via HTTP headers, error messages, or open-source intelligence). Without authentication, they craft a malformed inference request that triggers the `SubstituteFromAttrs` code path with an unexpected number of type arguments. The TF process hits the CHECK-fail assertion and crashes. Automated process restarts (common in Kubernetes deployments) bring the service back up, allowing the attacker to repeat the crash in a loop — effectively maintaining a persistent denial of service against the ML inference layer. For organizations using AI models in fraud detection, content moderation, or real-time decisioning, this translates directly into business disruption.
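The crash-loop behavior described above is also what the DETECT step should alert on. A minimal sliding-window sketch; the class name, threshold, and window size are illustrative choices, not part of any monitoring product:

```python
from collections import deque
import time

class CrashLoopDetector:
    """Alert when more than `threshold` process exits occur within `window` seconds."""

    def __init__(self, threshold: int = 3, window: float = 60.0):
        self.threshold = threshold
        self.window = window
        self.exits = deque()

    def record_exit(self, ts=None) -> bool:
        """Record one process exit; return True if the crash-loop alert fires."""
        ts = time.time() if ts is None else ts
        self.exits.append(ts)
        # Drop exits that have fallen out of the sliding window.
        while self.exits and ts - self.exits[0] > self.window:
            self.exits.popleft()
        return len(self.exits) > self.threshold

d = CrashLoopDetector(threshold=3, window=60.0)
alerts = [d.record_exit(t) for t in (0, 10, 20, 30)]
print(alerts)  # the fourth exit inside the window trips the alert
```

In a Kubernetes deployment the same signal is usually available for free via container restart counts; the point is to alert on the rate, not on any single restart.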


CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
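For readers scripting against this data: a CVSS v3.x vector string is a `/`-separated list of `metric:value` pairs, so the vector above parses mechanically. A small sketch (the `parse_cvss` helper is ours):

```python
# Parse the CVSS v3.1 vector from this advisory into metric/value pairs.
vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H"

def parse_cvss(v: str) -> dict:
    """Split a CVSS v3.x vector string into a {metric: value} mapping."""
    parts = v.split("/")
    assert parts[0].startswith("CVSS:"), "not a CVSS vector"
    return dict(p.split(":") for p in parts[1:])

metrics = parse_cvss(vector)
print(metrics["AV"], metrics["A"])  # N H
```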

Timeline

Published
September 16, 2022
Last Modified
November 21, 2024
First Seen
September 16, 2022
