CVE-2020-15206: TensorFlow: SavedModel protobuf DoS in inference serving

HIGH PoC AVAILABLE
Published September 25, 2020
CISO Take

If your org runs tensorflow-serving or any inference-as-a-service stack built on TensorFlow pre-2.3.1, an unauthenticated attacker who can supply a crafted SavedModel can crash your inference service with zero privileges required. Patch to TF 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1+ immediately and audit any externally-reachable model upload endpoints. Legacy MLOps pipelines that pin old TF versions are the highest-risk surface here.

Risk Assessment

CVSS 7.5 with AV:N/AC:L/PR:N/UI:N makes this trivially weaponizable from the network with no authentication. The blast radius is limited to availability (no confidentiality or integrity impact per CVSS), but for production AI inference infrastructure, DoS equates to direct revenue and operational impact. Not in CISA KEV and no confirmed active exploitation as of 2025, so residual risk is moderate for patched environments. However, organizations with pinned legacy TF versions in MLOps pipelines or air-gapped inference servers that lag patch cycles remain exposed.

Affected Systems

Package      Ecosystem   Vulnerable Range                                     Patched
tensorflow   pip         < 1.15.4; 2.0.0–2.0.2; 2.1.0–2.1.1; 2.2.0; 2.3.0     1.15.4, 2.0.3, 2.1.2, 2.2.1, 2.3.1
leap         —           —                                                    No patch

Severity & Risk

CVSS 3.1
7.5 / 10
EPSS
0.5%
chance of exploitation in 30 days
Higher than 65% of all CVEs
Exploitation Status
Exploit Available
Exploitation: MEDIUM
Sophistication
Moderate
Exploitation Confidence
Medium
Public PoC indexed (trickest/cve)
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

AV (Attack Vector)        Network
AC (Attack Complexity)    Low
PR (Privileges Required)  None
UI (User Interaction)     None
S (Scope)                 Unchanged
C (Confidentiality)       None
I (Integrity)             None
A (Availability)          High

Recommended Action

6 steps
  1. Patch: Upgrade TensorFlow to 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1+. Verify with pip show tensorflow.

  2. Model validation: Implement cryptographic signing and signature verification for SavedModel artifacts in your model registry before loading. Reject unsigned or untrusted models.

  3. Access control: Restrict who can push models to your serving infrastructure — enforce model registry RBAC.

  4. Isolation: Run tensorflow-serving in containers with restart policies and resource limits to contain impact of crashes.

  5. Detection: Alert on abnormal tensorflow-serving process restarts or segfault signals in system logs.

  6. Network: If tf-serving is internet-facing, place it behind an authenticated API gateway — this vulnerability requires no auth at the TF level.
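Step 1 lends itself to automation in CI. A minimal stdlib-only sketch that flags a pinned TensorFlow version predating the patched release of its line (the helper names are illustrative, not part of any TensorFlow API):

```python
# CI gate for step 1: flag TensorFlow versions that predate the
# CVE-2020-15206 fix for their release line. Stdlib-only sketch.

# First patched release per affected line, per the advisory.
FIXED = {
    "1.15": (1, 15, 4),
    "2.0": (2, 0, 3),
    "2.1": (2, 1, 2),
    "2.2": (2, 2, 1),
    "2.3": (2, 3, 1),
}

def parse(version: str) -> tuple:
    """'2.2.1' -> (2, 2, 1); pre-release suffixes are not handled."""
    return tuple(int(part) for part in version.split(".")[:3])

def is_vulnerable(version: str) -> bool:
    v = parse(version)
    line = f"{v[0]}.{v[1]}"
    if line in FIXED:
        return v < FIXED[line]   # below the backported fix for this line
    return v < (2, 3, 1)         # older lines never received a backport

if __name__ == "__main__":
    for pinned in ("1.15.3", "2.2.1", "2.3.0", "2.4.0"):
        print(pinned, "VULNERABLE" if is_vulnerable(pinned) else "ok")
```

In a real pipeline you would feed this the version from your lockfile or `pip show tensorflow` output rather than a hard-coded list.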

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, Robustness and Cybersecurity Article 9 - Risk Management System
ISO 42001
A.6.2.6 - AI System Robustness and Security A.8.4 - AI Data and Artifact Integrity
NIST AI RMF
MANAGE 2.2 - Mechanisms to Manage AI Risks MEASURE 2.6 - AI Risk Measurement — Security
OWASP LLM Top 10
LLM04 - Model Denial of Service

Frequently Asked Questions

What is CVE-2020-15206?

CVE-2020-15206 is a high-severity denial-of-service vulnerability in TensorFlow before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, and 2.3.1. A crafted SavedModel protocol buffer with renamed or missing required keys causes segfaults and data corruption when the model is loaded, crashing tensorflow-serving and other inference-as-a-service deployments. No authentication or privileges are required beyond the ability to supply a model, so externally-reachable model upload endpoints and legacy MLOps pipelines that pin old TF versions are the highest-risk surface.

Is CVE-2020-15206 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2020-15206, increasing the risk of exploitation.

How to fix CVE-2020-15206?

  1. Patch: Upgrade TensorFlow to 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1+. Verify with `pip show tensorflow`.
  2. Model validation: Implement cryptographic signing and signature verification for SavedModel artifacts in your model registry before loading. Reject unsigned or untrusted models.
  3. Access control: Restrict who can push models to your serving infrastructure — enforce model registry RBAC.
  4. Isolation: Run tensorflow-serving in containers with restart policies and resource limits to contain impact of crashes.
  5. Detection: Alert on abnormal tensorflow-serving process restarts or segfault signals in system logs.
  6. Network: If tf-serving is internet-facing, place it behind an authenticated API gateway — this vulnerability requires no auth at the TF level.

What systems are affected by CVE-2020-15206?

This vulnerability affects the following AI/ML architecture patterns: model serving, inference pipelines, ML model registries, training pipelines.

What is the CVSS score for CVE-2020-15206?

CVE-2020-15206 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.47%.

Technical Details

NVD Description

In Tensorflow before versions 1.15.4, 2.0.3, 2.1.2, 2.2.1 and 2.3.1, changing the TensorFlow's `SavedModel` protocol buffer and altering the name of required keys results in segfaults and data corruption while loading the model. This can cause a denial of service in products using `tensorflow-serving` or other inference-as-a-service installments. Fixes were added in commits f760f88b4267d981e13f4b302c437ae800445968 and fcfef195637c6e365577829c4d67681695956e7d (both going into TensorFlow 2.2.0 and 2.3.0 but not yet backported to earlier versions). However, this was not enough, as #41097 reports a different failure mode. The issue is patched in commit adf095206f25471e864a8e63a0f1caef53a0e3a6, and is released in TensorFlow versions 1.15.4, 2.0.3, 2.1.2, 2.2.1, or 2.3.1.

Exploitation Scenario

An attacker with access to a model registry (compromised CI/CD credential, insider, or misconfigured public bucket) crafts a TensorFlow SavedModel with deliberately malformed protobuf keys — removing or renaming required fields. The malicious model is pushed to the production model registry and picked up by tensorflow-serving during a scheduled model refresh or hot-reload. On load, TF dereferences a null or corrupt pointer from the malformed protobuf, triggering a segfault that crashes the serving process. In a Kubernetes environment without proper restart policies, this takes the inference endpoint offline; with restart policies, the attacker can trigger repeated crashes to sustain denial of service. No network authentication and no ML expertise are required — just knowledge of the SavedModel protobuf schema.
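Because the failure mode is a hard segfault at load time, the containment idea from step 4 of the recommended actions can also be applied at model-ingest time: attempt the load in a disposable child process, so a protobuf-triggered crash kills only the probe and never the serving process. A hedged sketch (the function names and gating flow are hypothetical; `tf.saved_model.load` is the standard TF 2.x loader):

```python
import subprocess
import sys

def probe_exits_cleanly(argv: list, timeout: int = 120) -> bool:
    """Run a throwaway child process; a segfault or abort kills only
    the child, so the caller survives a malformed artifact."""
    try:
        result = subprocess.run(argv, timeout=timeout, capture_output=True)
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0

def model_loads_cleanly(model_dir: str) -> bool:
    """Hypothetical pre-flight gate: try loading the SavedModel in a
    subprocess before the real serving process ever touches it."""
    probe = f"import tensorflow as tf; tf.saved_model.load({model_dir!r})"
    return probe_exits_cleanly([sys.executable, "-c", probe])
```

This is defense-in-depth, not a substitute for patching: a probe only detects models that crash the exact TF version the probe runs.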

Weaknesses (CWE)

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
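For pipelines that triage advisories automatically, the vector string above decomposes mechanically into its component metrics. A stdlib-only sketch (the function name is illustrative):

```python
def parse_cvss_vector(vector: str) -> dict:
    """Split 'CVSS:3.1/AV:N/.../A:H' into {'AV': 'N', ..., 'A': 'H'},
    dropping the 'CVSS:3.1' version prefix."""
    return dict(part.split(":") for part in vector.split("/")[1:])

metrics = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H")
# Only availability is impacted: metrics["A"] == "H"; C and I are "N".
```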

Timeline

Published
September 25, 2020
Last Modified
November 21, 2024
First Seen
September 25, 2020

Related Vulnerabilities