CVE-2024-8984: litellm: unauthenticated DoS via multipart boundary parsing
GHSA-fh2c-86xm-pm2x | Severity: HIGH | CISA SSVC decision: Track

Any organization running LiteLLM as an LLM API gateway is exposed to a zero-authentication denial of service that can take down all AI service routing instantly. Upgrade to 1.56.2 immediately; the attack is trivial to script and requires no credentials or insider knowledge. If patching is blocked, place a WAF or reverse proxy in front that enforces multipart boundary length limits.
Risk Assessment
High effective risk despite a moderate CVSS score (7.5). The combination of zero authentication, zero user interaction, and a network-accessible attack surface means any exposed LiteLLM instance is one HTTP request away from outage. EPSS is low (0.00202), indicating no current mass exploitation, but the simplicity of the technique makes it a realistic threat for targeted disruption. Organizations using LiteLLM as a centralized LLM proxy face amplified impact: a single DoS event disrupts all downstream AI-powered applications simultaneously.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| litellm | pip | < 1.56.2 | 1.56.2 |
Recommended Action
1. PATCH: Upgrade litellm to >= 1.56.2 immediately (pip install --upgrade litellm).
2. WORKAROUND: If patching is blocked, place a WAF or nginx/Caddy reverse proxy in front that enforces multipart boundary length limits (<= 70 characters per RFC 2046).
3. NETWORK CONTROL: Restrict LiteLLM endpoint access to known IP ranges or authenticated clients via an API gateway.
4. RATE LIMITING: Implement per-IP request rate limiting at the load balancer layer.
5. DETECTION: Alert on requests whose multipart boundaries exceed 100 characters or contain repeated dash sequences in the Content-Type header.
6. MONITORING: Watch for sudden CPU/memory spikes on LiteLLM processes as an indicator of active exploitation.
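The workaround and detection steps above hinge on the same check: RFC 2046 caps a multipart boundary at 70 characters. A minimal sketch of that check, usable in a custom proxy or middleware in front of LiteLLM (the function name and regex are illustrative, not part of LiteLLM or any WAF product):

```python
import re

# RFC 2046 section 5.1.1 caps multipart boundaries at 70 characters;
# attack requests for this CVE pad the boundary with thousands of dashes.
MAX_BOUNDARY_LEN = 70

_BOUNDARY_RE = re.compile(r'boundary="?([^";]+)"?', re.IGNORECASE)

def is_safe_content_type(content_type: str) -> bool:
    """Return False if a multipart Content-Type carries an oversized boundary."""
    if "multipart/" not in content_type.lower():
        return True  # non-multipart requests are not affected by this CVE
    match = _BOUNDARY_RE.search(content_type)
    if match is None:
        return False  # multipart without a parseable boundary: reject
    return len(match.group(1)) <= MAX_BOUNDARY_LEN
```

Requests failing the check can be rejected with 400 before they ever reach the vulnerable parser.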
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2024-8984?
CVE-2024-8984 is an unauthenticated denial-of-service vulnerability in the litellm Python package (versions before 1.56.2). Appending characters such as dashes to the end of a multipart boundary in an HTTP request causes the server to process each character, consuming excessive resources until the service becomes unavailable. Upgrade to 1.56.2 immediately; if patching is blocked, place a WAF or reverse proxy in front that enforces multipart boundary length limits.
Is CVE-2024-8984 actively exploited?
No confirmed active exploitation of CVE-2024-8984 has been reported, but organizations should still patch proactively.
How to fix CVE-2024-8984?
1. PATCH: Upgrade litellm to >= 1.56.2 immediately (pip install --upgrade litellm).
2. WORKAROUND: If patching is blocked, place a WAF or nginx/Caddy reverse proxy in front that enforces multipart boundary length limits (<= 70 characters per RFC 2046).
3. NETWORK CONTROL: Restrict LiteLLM endpoint access to known IP ranges or authenticated clients via an API gateway.
4. RATE LIMITING: Implement per-IP request rate limiting at the load balancer layer.
5. DETECTION: Alert on requests whose multipart boundaries exceed 100 characters or contain repeated dash sequences in the Content-Type header.
6. MONITORING: Watch for sudden CPU/memory spikes on LiteLLM processes as an indicator of active exploitation.
What systems are affected by CVE-2024-8984?
This vulnerability affects the following AI/ML architecture patterns: LLM API gateways, model serving, agent frameworks, RAG pipelines, AI application backends.
What is the CVSS score for CVE-2024-8984?
CVE-2024-8984 has a CVSS v3.0 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.64%.
Technical Details
NVD Description
A Denial of Service (DoS) vulnerability exists in berriai/litellm version v1.44.5. This vulnerability can be exploited by appending characters, such as dashes (-), to the end of a multipart boundary in an HTTP request. The server continuously processes each character, leading to excessive resource consumption and rendering the service unavailable. The issue is unauthenticated and does not require any user interaction, impacting all users of the service.
Exploitation Scenario
An adversary targeting an organization's AI infrastructure identifies a publicly accessible LiteLLM endpoint (common in internal developer platforms or SaaS AI products). They craft an HTTP POST request with a Content-Type header containing a multipart boundary padded with thousands of dash characters. The LiteLLM parser processes each character iteratively, spinning CPU cycles without throttling. A single attacker script sending this request in a loop saturates the server, making the LiteLLM proxy unresponsive. All downstream applications — AI copilots, RAG systems, agent workflows — fail simultaneously. No credentials, no prior access, no specialized ML knowledge required. Total time from discovery to outage: minutes.
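For retroactive hunting, the padded-boundary pattern described above is easy to spot in proxy or access logs that record the Content-Type header. A minimal sketch, assuming your logs include the header value (the 20-dash threshold and log format are assumptions to tune for your environment):

```python
import re

# A legitimate boundary rarely contains more than the conventional short
# "----" prefix of dashes; 20+ consecutive dashes inside the boundary
# value is treated here as a padding indicator (assumed threshold).
SUSPICIOUS_BOUNDARY = re.compile(r'boundary=[^;\s]*-{20,}', re.IGNORECASE)

def find_suspicious_requests(log_lines):
    """Return log lines whose recorded Content-Type suggests boundary padding."""
    return [line for line in log_lines if SUSPICIOUS_BOUNDARY.search(line)]
```

Hits from this scan, correlated with CPU/memory spikes on the LiteLLM host, are a strong indicator of attempted exploitation.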
CVSS Vector
CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

References
- github.com/BerriAI/litellm/blob/8c5ff150f6142608ffe968e4e68429f978fda187/litellm/tests/test_spend_logs.py
- github.com/BerriAI/litellm/commit/4f49f836aa844ac9b6bfbeff27e6f6b2b9cf3f61
- github.com/advisories/GHSA-fh2c-86xm-pm2x
- huntr.com/bounties/554fc76b-3097-4223-b4cf-110b853e9355
- nvd.nist.gov/vuln/detail/CVE-2024-8984
Related Vulnerabilities (same package: litellm)
- CVE-2026-42208 (9.8): LiteLLM: SQL injection exposes LLM API credentials
- CVE-2026-35030 (9.1): LiteLLM: auth bypass via JWT cache key collision
- CVE-2024-6825 (8.8): LiteLLM: RCE via post_call_rules callback injection
- CVE-2026-40217 (8.8): LiteLLM: RCE via bytecode rewriting in guardrails API
- CVE-2026-42271 (8.8): LiteLLM: RCE via MCP test endpoint command injection