CVE-2024-10188: litellm: unauthenticated DoS crashes LLM proxy server

GHSA-gw2q-qw9j-rgv7 HIGH
Published March 20, 2025
CISO Take

Any litellm deployment exposed to the network — including internal AI gateways — can be crashed by an unauthenticated attacker with a single crafted request, taking down all LLM routing for dependent applications. Patch to 1.53.1.dev1 immediately; if you cannot patch, place litellm behind an authenticated reverse proxy or WAF as a stopgap. Audit whether litellm endpoints are internet-reachable — many teams expose them naively during POC phases.

Risk Assessment

Effective risk is HIGH for any organization using litellm as an LLM gateway or proxy. CVSS 7.5 is accurate: no authentication, no user interaction, network reachability, and low attack complexity make this trivially exploitable. EPSS (0.00129) is currently low, suggesting no observed mass exploitation, but the attack is simple enough that any motivated actor can reproduce it from the public huntr disclosure. The blast radius is limited to availability — no confidentiality or integrity impact — but in AI-dependent workflows, a downed LLM proxy is a full service outage.

Affected Systems

Package   Ecosystem   Vulnerable Range   Patched
litellm   pip         < 1.53.1.dev1      1.53.1.dev1


Severity & Risk

CVSS 3.1: 7.5 / 10
EPSS: 0.3% chance of exploitation in 30 days (higher than 50% of all CVEs)
Exploitation Status: No known exploitation
Sophistication: Trivial

Attack Surface

AV: Network
AC: Low
PR: None
UI: None
S: Unchanged
C: None
I: None
A: High

Recommended Action

  1. PATCH

    Upgrade litellm to >= 1.53.1.dev1 immediately (commit 21156ff5).
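A quick way to confirm which of your environments still run a vulnerable build is to compare the installed version against the patched release. A minimal sketch, assuming litellm is installed in the environment being checked (uses the widely available `packaging` library for PEP 440 comparison):

```python
# Check the installed litellm version against the patched release.
from importlib.metadata import version, PackageNotFoundError
from packaging.version import Version  # third-party 'packaging' package

PATCHED = Version("1.53.1.dev1")

def is_patched(pkg: str = "litellm") -> bool:
    """Return True if pkg is absent or at/above the patched version."""
    try:
        return Version(version(pkg)) >= PATCHED
    except PackageNotFoundError:
        return True  # not installed, nothing to patch
```

Note that per PEP 440, `1.53.1.dev1` sorts above every `1.53.0` release but below the final `1.53.1`, so the comparison above treats any `< 1.53.1.dev1` build as vulnerable, matching the advisory's range.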

  2. WORKAROUND

    If patching is not possible, place litellm behind an authenticated reverse proxy (nginx + basic auth / mTLS) to eliminate unauthenticated access.
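A minimal sketch of the reverse-proxy workaround, assuming litellm listens on localhost:4000 and nginx terminates TLS; the hostname, certificate paths, and port are placeholders for your deployment:

```nginx
# Hedged sketch: all paths, names, and the upstream port are assumptions.
server {
    listen 443 ssl;
    server_name llm-gateway.internal.example.com;

    ssl_certificate     /etc/nginx/tls/gateway.crt;
    ssl_certificate_key /etc/nginx/tls/gateway.key;

    location / {
        auth_basic           "LLM Gateway";
        auth_basic_user_file /etc/nginx/.htpasswd;  # created with htpasswd
        proxy_pass           http://127.0.0.1:4000;
        proxy_set_header     Host $host;
    }
}
```

With this in place, litellm itself should bind only to 127.0.0.1 so the proxy cannot be bypassed.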

  3. NETWORK CONTROLS

    Ensure litellm is not internet-facing; restrict access to known internal CIDR ranges via firewall rules.
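Where firewall changes are slow to land, the same allowlist logic can be enforced in application middleware. A minimal sketch using the standard-library `ipaddress` module; the CIDR ranges are placeholders for your internal networks:

```python
# Hedged sketch: reject clients outside known internal CIDR ranges.
import ipaddress

ALLOWED_CIDRS = [
    ipaddress.ip_network("10.0.0.0/8"),       # placeholder internal range
    ipaddress.ip_network("192.168.0.0/16"),   # placeholder internal range
]

def is_internal(client_ip: str) -> bool:
    """Return True if client_ip falls inside an allowed internal CIDR."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_CIDRS)
```

This is defense in depth, not a substitute for the firewall rule: spoofed source addresses and misconfigured proxies can defeat IP checks alone.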

  4. RATE LIMITING

    Apply request rate limits at the proxy/WAF layer to reduce DoS surface even post-patch.
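If nginx fronts the gateway, its built-in `limit_req` module covers this step. A hedged fragment — the rate and burst values are illustrative and should be tuned to your expected traffic:

```nginx
# Hedged sketch: rate/burst values are illustrative, not recommendations.
limit_req_zone $binary_remote_addr zone=llm_rl:10m rate=10r/s;

server {
    location / {
        limit_req  zone=llm_rl burst=20 nodelay;
        proxy_pass http://127.0.0.1:4000;  # assumed litellm upstream
    }
}
```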

  5. MONITORING

    Alert on litellm process restarts or sudden spikes in 5xx errors from your LLM gateway — these are indicators of exploitation attempts.

  6. DETECTION

    Review logs for abnormally large or malformed request bodies to litellm endpoints.
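A minimal log-scanning sketch for this review. It assumes a hypothetical access-log format that records the request size as a `request_length=<bytes>` field — adapt the regex and threshold to your actual log layout:

```python
# Hedged sketch: flags log lines whose request body exceeds a threshold.
# The "request_length=" field is a hypothetical log format, not litellm's.
import re

PATTERN = re.compile(r"request_length=(\d+)")
THRESHOLD = 1_000_000  # 1 MB; tune for your normal prompt sizes

def suspicious_lines(log_lines):
    """Yield log lines reporting an abnormally large request body."""
    for line in log_lines:
        m = PATTERN.search(line)
        if m and int(m.group(1)) > THRESHOLD:
            yield line
```

Malformed-but-small payloads will not trip a size check, so pair this with the 5xx-spike alerting from the monitoring step.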

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness, and cybersecurity
ISO 42001
A.6.2.6 - AI system availability and resilience
NIST AI RMF
MANAGE-2.2 - Risks from third-party AI components
OWASP LLM Top 10
LLM04 - Model Denial of Service

Frequently Asked Questions

What is CVE-2024-10188?

CVE-2024-10188 is an unauthenticated denial-of-service vulnerability in BerriAI/litellm: the proxy parses user input with ast.literal_eval, so a single crafted request can crash the server and take down all LLM routing for dependent applications. Patch to 1.53.1.dev1 immediately; if you cannot patch, place litellm behind an authenticated reverse proxy or WAF as a stopgap, and audit whether your litellm endpoints are internet-reachable.

Is CVE-2024-10188 actively exploited?

No confirmed active exploitation of CVE-2024-10188 has been reported, but organizations should still patch proactively.

How to fix CVE-2024-10188?

1. PATCH: Upgrade litellm to >= 1.53.1.dev1 immediately (commit 21156ff5). 2. WORKAROUND (if patch not possible): Place litellm behind an authenticated reverse proxy (nginx + basic auth / mTLS) to eliminate unauthenticated access. 3. NETWORK CONTROLS: Ensure litellm is not internet-facing; restrict access to known internal CIDR ranges via firewall rules. 4. RATE LIMITING: Apply request rate limits at the proxy/WAF layer to reduce DoS surface even post-patch. 5. MONITORING: Alert on litellm process restarts or sudden spikes in 5xx errors from your LLM gateway — these are indicators of exploitation attempts. 6. DETECTION: Review logs for abnormally large or malformed request bodies to litellm endpoints.

What systems are affected by CVE-2024-10188?

This vulnerability affects the following AI/ML architecture patterns: LLM proxy and gateway, agent frameworks, RAG pipelines, model serving, AI application backends.

What is the CVSS score for CVE-2024-10188?

CVE-2024-10188 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.27%.

Technical Details

NVD Description

A vulnerability in BerriAI/litellm, as of commit 26c03c9, allows unauthenticated users to cause a Denial of Service (DoS) by exploiting the use of ast.literal_eval to parse user input. This function is not safe and is prone to DoS attacks, which can crash the litellm Python server.
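The underlying hazard is that `ast.literal_eval`, while safe against code execution, can still be crashed by oversized or deeply nested input during parsing. A defensive sketch of the kind of guard the patch addresses — the limits below are illustrative assumptions, not litellm's actual values:

```python
# Hedged sketch: size and nesting guards in front of ast.literal_eval.
# MAX_LEN and MAX_DEPTH are illustrative limits, not litellm's.
import ast

MAX_LEN = 10_000   # length check runs first, before any parsing
MAX_DEPTH = 20     # cap on literal nesting depth

def _depth(node, d=0):
    """Maximum nesting depth of an AST."""
    return max([d] + [_depth(c, d + 1) for c in ast.iter_child_nodes(node)])

def safe_literal_eval(text: str):
    if len(text) > MAX_LEN:
        raise ValueError("payload too large")
    tree = ast.parse(text, mode="eval")
    if _depth(tree) > MAX_DEPTH:
        raise ValueError("payload too deeply nested")
    return ast.literal_eval(tree)
```

The length check must come before `ast.parse`, since the parser itself is the component that pathological nesting can exhaust.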

Exploitation Scenario

An adversary identifies a litellm endpoint via passive DNS, GitHub leaks, or internal network scanning. They craft an HTTP request to any litellm API endpoint with a payload designed to exhaust resources when ast.literal_eval parses it — for example, a deeply nested structure that overflows the parser's recursion limit or triggers excessive memory allocation. No credentials are required. The Python server process crashes or becomes unresponsive, taking offline all LLM-dependent services (AI agents, RAG queries, copilot features) routing through that litellm instance. In a CI/CD or automated AI pipeline context, this could silently stall batch enrichment jobs or break production inference without immediate human awareness.

CVSS Vector

CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

Timeline

Published
March 20, 2025
Last Modified
March 20, 2025
First Seen
March 20, 2025

Related Vulnerabilities