Any litellm deployment exposed to the network — including internal AI gateways — can be crashed by an unauthenticated attacker with a single crafted request, taking down all LLM routing for dependent applications. Patch to 1.53.1.dev1 immediately; if you cannot patch, place litellm behind an authenticated reverse proxy or WAF as a stopgap. Audit whether litellm endpoints are internet-reachable — many teams expose them naively during POC phases.
Risk Assessment
Effective risk is HIGH for any organization using litellm as an LLM gateway or proxy. CVSS 7.5 is accurate: no authentication, no user interaction, network reachability, and low attack complexity make this trivially exploitable. EPSS (0.00129) is currently low, suggesting no observed mass exploitation, but the attack is simple enough that any motivated actor can reproduce it from the public huntr disclosure. The blast radius is limited to availability — no confidentiality or integrity impact — but in AI-dependent workflows, a downed LLM proxy is a full service outage.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| litellm | pip | < 1.53.1.dev1 | 1.53.1.dev1 |
If you run any litellm version earlier than 1.53.1.dev1, you are affected.
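As a quick first-pass audit, the installed version can be checked programmatically. This is a sketch using only the standard library; the crude numeric comparison ignores dev/pre-release ordering subtleties, so confirm borderline results with `pip show litellm`.

```python
from importlib import metadata

PATCHED = "1.53.1.dev1"

def release_key(version: str):
    """Crude sort key built from leading numeric segments only
    ('1.53.1.dev1' -> (1, 53, 1)); ignores dev/pre-release markers."""
    parts = []
    for piece in version.split("."):
        if piece.isdigit():
            parts.append(int(piece))
        else:
            break
    return tuple(parts)

def litellm_vulnerable():
    """Return True if the installed litellm predates the patched release,
    False if it is at or past it, or None if litellm is not installed."""
    try:
        installed = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return None
    return release_key(installed) < release_key(PATCHED)
```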
Recommended Action
Six steps, in priority order:

1. PATCH: Upgrade litellm to >= 1.53.1.dev1 immediately (commit 21156ff5).
2. WORKAROUND (if patching is not possible): Place litellm behind an authenticated reverse proxy (nginx with basic auth or mTLS) to eliminate unauthenticated access.
3. NETWORK CONTROLS: Ensure litellm is not internet-facing; restrict access to known internal CIDR ranges via firewall rules.
4. RATE LIMITING: Apply request rate limits at the proxy/WAF layer to reduce the DoS surface even after patching.
5. MONITORING: Alert on litellm process restarts or sudden spikes in 5xx errors from your LLM gateway; these are indicators of exploitation attempts.
6. DETECTION: Review logs for abnormally large or malformed request bodies sent to litellm endpoints.
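The workaround and rate-limiting steps above can be combined in a single reverse-proxy config. This is an illustrative nginx sketch, not a verified production config: the hostname, paths, and rate-limit zone name are assumptions, and the upstream port assumes litellm's default of 4000.

```nginx
# Illustrative stopgap: basic auth + rate limiting + body-size cap
# in front of litellm. Adapt names, paths, and limits to your deployment.
limit_req_zone $binary_remote_addr zone=litellm_rl:10m rate=10r/s;

server {
    listen 443 ssl;
    server_name llm-gateway.internal;              # hypothetical hostname

    location / {
        auth_basic           "LLM gateway";
        auth_basic_user_file /etc/nginx/.htpasswd; # created with htpasswd
        limit_req            zone=litellm_rl burst=20 nodelay;
        client_max_body_size 1m;                   # cap bodies before they reach litellm
        proxy_pass           http://127.0.0.1:4000;  # litellm proxy default port
    }
}
```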
Frequently Asked Questions
What is CVE-2024-10188?
CVE-2024-10188 is an unauthenticated denial-of-service vulnerability in BerriAI/litellm. The server parses user input with ast.literal_eval, so a single crafted request can crash the Python process, taking down all LLM routing for dependent applications. It is fixed in version 1.53.1.dev1.
Is CVE-2024-10188 actively exploited?
No confirmed active exploitation of CVE-2024-10188 has been reported, but organizations should still patch proactively.
How to fix CVE-2024-10188?
1. PATCH: Upgrade litellm to >= 1.53.1.dev1 immediately (commit 21156ff5).
2. WORKAROUND (if patching is not possible): Place litellm behind an authenticated reverse proxy (nginx with basic auth or mTLS) to eliminate unauthenticated access.
3. NETWORK CONTROLS: Ensure litellm is not internet-facing; restrict access to known internal CIDR ranges via firewall rules.
4. RATE LIMITING: Apply request rate limits at the proxy/WAF layer to reduce the DoS surface even after patching.
5. MONITORING: Alert on litellm process restarts or sudden spikes in 5xx errors from your LLM gateway; these are indicators of exploitation attempts.
6. DETECTION: Review logs for abnormally large or malformed request bodies sent to litellm endpoints.
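The detection step can be prototyped as a small log filter. The log format below is purely hypothetical (METHOD PATH REQUEST_BYTES STATUS) and the threshold is an assumed value; adapt both to your real access-log schema and normal traffic profile.

```python
THRESHOLD = 1_000_000  # bytes; assumed cutoff, tune to your traffic

def flag_suspicious(lines, threshold=THRESHOLD):
    """Return (path, size) pairs for requests whose body exceeds `threshold`.
    Assumes a hypothetical whitespace-separated log format:
    METHOD PATH REQUEST_BYTES STATUS. Illustrative only."""
    suspicious = []
    for line in lines:
        parts = line.split()
        if len(parts) != 4:
            continue  # skip lines that don't match the assumed format
        method, path, size, status = parts
        if size.isdigit() and int(size) > threshold:
            suspicious.append((path, int(size)))
    return suspicious
```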
What systems are affected by CVE-2024-10188?
This vulnerability affects the following AI/ML architecture patterns: LLM proxy and gateway, agent frameworks, RAG pipelines, model serving, AI application backends.
What is the CVSS score for CVE-2024-10188?
CVE-2024-10188 has a CVSS v3.0 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.13% (0.00129).
Technical Details
NVD Description
A vulnerability in BerriAI/litellm, as of commit 26c03c9, allows unauthenticated users to cause a Denial of Service (DoS) by exploiting the use of ast.literal_eval to parse user input. This function is not safe and is prone to DoS attacks, which can crash the litellm Python server.
Exploitation Scenario
An adversary identifies a litellm endpoint via passive DNS, GitHub leaks, or internal network scanning. They craft an HTTP request to any litellm API endpoint with a payload specifically designed to cause ast.literal_eval to enter a resource-exhaustive evaluation loop — for example, a deeply nested structure or an expression that triggers excessive memory allocation. No credentials are required. The Python server process crashes or becomes unresponsive, taking offline all LLM-dependent services (AI agents, RAG queries, copilot features) routing through that litellm instance. In a CI/CD or automated AI pipeline context, this could silently stall batch enrichment jobs or break production inference without immediate human awareness.
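The root cause is parsing untrusted input with ast.literal_eval, which will evaluate adversarial literals such as deeply nested containers. The sketch below illustrates the general defensive pattern, not the actual fix in commit 21156ff5: cap the body size before parsing and use json.loads, which rejects arbitrary Python literals and enforces its own recursion limits. MAX_BODY_BYTES is an assumed value.

```python
import json

MAX_BODY_BYTES = 64 * 1024  # illustrative cap; the real patch may differ

def parse_untrusted(raw: str):
    """Parse untrusted input defensively instead of using ast.literal_eval.
    The size check bounds work before parsing begins; json.loads raises
    ValueError (JSONDecodeError) on anything that is not plain JSON."""
    if len(raw.encode("utf-8")) > MAX_BODY_BYTES:
        raise ValueError("request body too large")
    return json.loads(raw)
```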
CVSS Vector
CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
Related Vulnerabilities
All in the same package (litellm):

- CVE-2026-42208 (9.8): LiteLLM: SQL injection exposes LLM API credentials
- CVE-2026-35030 (9.1): LiteLLM: auth bypass via JWT cache key collision
- CVE-2024-6825 (8.8): LiteLLM: RCE via post_call_rules callback injection
- CVE-2026-40217 (8.8): LiteLLM: RCE via bytecode rewriting in guardrails API
- CVE-2026-42271 (8.8): LiteLLM: RCE via MCP test endpoint command injection