CVE-2024-6587: LiteLLM: SSRF leaks OpenAI API key to attacker
HIGH · PoC available · CISA SSVC: Track

Any LiteLLM 1.38.10 deployment is one unauthenticated HTTP request away from losing its OpenAI API key: no privileges, no user interaction required. Patch immediately and rotate all OpenAI keys that were in use on affected instances. If patching is not possible today, block the api_base parameter at the network or application layer as an emergency workaround.
Risk Assessment
HIGH. CVSS 7.5 with network-accessible vector, zero authentication barrier, and trivial exploitation makes this immediately actionable. The impact is full OpenAI API key compromise, enabling financial harm (unbounded API charges billed to the victim), access to GPT-4o and organization-level LLM resources, and potential exfiltration if the key has Assistants API or fine-tune permissions. LiteLLM's widespread adoption as an enterprise LLM gateway amplifies exposure significantly across AI-heavy organizations.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| litellm | pip | 1.38.10 | fixed in commit ba1912afd1b19e38d3704bb156adf887f91ae1e0 |
Running litellm 1.38.10, or any build without the fix commit? Assume you're affected.
Recommended Action
Five steps:

1. PATCH: Upgrade LiteLLM to a version at or after commit ba1912afd1b19e38d3704bb156adf887f91ae1e0.
2. ROTATE: Immediately rotate all OpenAI (and any other provider) API keys configured in affected LiteLLM instances; assume compromise if the endpoint was internet-accessible.
3. RESTRICT: As an emergency workaround, block user-supplied api_base in request validation middleware or deploy a WAF rule rejecting api_base in /chat/completions payloads.
4. DETECT: Review outbound HTTP logs from LiteLLM hosts for requests to non-sanctioned LLM provider domains and unexpected Authorization header destinations.
5. MONITOR: Enable OpenAI usage alerts for anomalous consumption spikes that would indicate stolen-key abuse.
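The RESTRICT step above can be sketched as a small request-validation pre-filter to run in middleware or a reverse proxy in front of the LiteLLM endpoint. This is an illustrative sketch, not LiteLLM configuration: the `base_url` alias in the blocklist is an assumption and should be verified against your deployed version.

```python
import json

# base_url is an assumed alias for api_base; confirm against your LiteLLM version
BLOCKED_KEYS = {"api_base", "base_url"}

def reject_api_base(path: str, body: bytes):
    """Return (allowed, reason). Blocks user-supplied api_base on completion routes.

    Illustrative pre-filter for a middleware or WAF layer, per the RESTRICT step.
    """
    if not path.endswith("/chat/completions"):
        return True, ""
    try:
        payload = json.loads(body or b"{}")
    except json.JSONDecodeError:
        # fail closed on unparseable bodies rather than forwarding them
        return False, "malformed JSON"
    bad = BLOCKED_KEYS & payload.keys()
    if bad:
        return False, f"forbidden parameter(s): {', '.join(sorted(bad))}"
    return True, ""
```

A WAF rule achieving the same effect would simply reject any /chat/completions body matching `"api_base"` as a JSON key.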
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2024-6587?
CVE-2024-6587 is a Server-Side Request Forgery (SSRF) vulnerability in berriai/litellm version 1.38.10. An unauthenticated attacker can set the api_base parameter in a POST /chat/completions request so that LiteLLM forwards the request, including the server's OpenAI API key, to an attacker-controlled domain. Patch immediately, rotate all OpenAI keys that were in use on affected instances, and if patching is not possible today, block the api_base parameter at the network or application layer.
Is CVE-2024-6587 actively exploited?
A public proof of concept is available (via the huntr report) and the EPSS score is high, but CVE-2024-6587 is not listed in the CISA Known Exploited Vulnerabilities catalog; CISA's SSVC assessment assigns a Track decision.
How to fix CVE-2024-6587?
1. PATCH: Upgrade LiteLLM to a version at or after commit ba1912afd1b19e38d3704bb156adf887f91ae1e0.
2. ROTATE: Immediately rotate all OpenAI (and any other provider) API keys configured in affected LiteLLM instances; assume compromise if the endpoint was internet-accessible.
3. RESTRICT: As an emergency workaround, block user-supplied api_base in request validation middleware or deploy a WAF rule rejecting api_base in /chat/completions payloads.
4. DETECT: Review outbound HTTP logs from LiteLLM hosts for requests to non-sanctioned LLM provider domains and unexpected Authorization header destinations.
5. MONITOR: Enable OpenAI usage alerts for anomalous consumption spikes that would indicate stolen-key abuse.
What systems are affected by CVE-2024-6587?
This vulnerability affects the following AI/ML architecture patterns: LLM API gateways, Multi-provider LLM routing, Agent frameworks, RAG pipelines, AI development platforms.
What is the CVSS score for CVE-2024-6587?
CVE-2024-6587 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 88.37%.
Technical Details
NVD Description
A Server-Side Request Forgery (SSRF) vulnerability exists in berriai/litellm version 1.38.10. This vulnerability allows users to specify the `api_base` parameter when making requests to `POST /chat/completions`, causing the application to send the request to the domain specified by `api_base`. This request includes the OpenAI API key. A malicious user can set the `api_base` to their own domain and intercept the OpenAI API key, leading to unauthorized access and potential misuse of the API key.
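The flaw class described above can be sketched as follows. This is an illustrative reconstruction of the vulnerable pattern, not LiteLLM's actual routing code; the function name and default are assumptions.

```python
def build_upstream_request(payload: dict, server_openai_key: str):
    """Illustrative sketch of the SSRF-prone pattern: the upstream base URL is
    taken from the untrusted request body, yet the server's own API key is
    attached to whatever host that URL points at."""
    base = payload.get("api_base", "https://api.openai.com/v1")  # attacker-controllable
    url = f"{base}/chat/completions"
    headers = {"Authorization": f"Bearer {server_openai_key}"}  # leaks if base is attacker's
    return url, headers
```

The fix commit's essential property is that the destination host is no longer derived from an untrusted request field when the server's credential is attached.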
Exploitation Scenario
An adversary with no credentials sends a single POST to /chat/completions on an internet-exposed LiteLLM endpoint, setting api_base to an attacker-controlled server (e.g., https://attacker.io/harvest). LiteLLM proxies the request to that server, forwarding the Authorization header containing the raw OpenAI API key. The attacker captures the key in seconds. Follow-on actions: (1) run GPU-intensive workloads billed to the victim, (2) enumerate org-level resources via the OpenAI API, (3) access or exfiltrate content from Assistants threads if org-scoped, or (4) resell the key. Total attacker skill required: able to use curl.
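The outbound-log review recommended in the DETECT step can be sketched as an allowlist filter over egress logs. The log format (outbound URL as the last field) and the sanctioned-domain set are assumptions for illustration; adapt both to your environment.

```python
from urllib.parse import urlparse

# illustrative allowlist; populate with the providers you actually route to
SANCTIONED = {"api.openai.com", "api.anthropic.com"}

def flag_suspect_destinations(log_lines):
    """Given egress log lines whose last whitespace-separated field is the
    outbound URL, return destination hosts outside the sanctioned set."""
    suspects = []
    for line in log_lines:
        fields = line.strip().split()
        if not fields:
            continue
        host = urlparse(fields[-1]).hostname
        if host and host not in SANCTIONED:
            suspects.append(host)
    return suspects
```

Any hit here on a LiteLLM host warrants immediate key rotation, since the Authorization header travels with the request.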
Weaknesses (CWE)
CWE-918: Server-Side Request Forgery (SSRF)
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N

References
- github.com/berriai/litellm/commit/ba1912afd1b19e38d3704bb156adf887f91ae1e0 Patch
- huntr.com/bounties/4001e1a2-7b7a-4776-a3ae-e6692ec3d997 Exploit 3rd Party
- github.com/ARPSyndicate/cve-scores 3rd Party
- github.com/fkie-cad/nvd-json-data-feeds 3rd Party
- github.com/lambdasawa/_lambdasawa 3rd Party
- github.com/lambdasawa/lambdasawa 3rd Party
Related Vulnerabilities
- CVE-2026-42208 (9.8) LiteLLM: SQL injection exposes LLM API credentials (same package: litellm)
- CVE-2026-35030 (9.1) LiteLLM: auth bypass via JWT cache key collision (same package: litellm)
- CVE-2024-6825 (8.8) LiteLLM: RCE via post_call_rules callback injection (same package: litellm)
- CVE-2026-40217 (8.8) LiteLLM: RCE via bytecode rewriting in guardrails API (same package: litellm)
- CVE-2026-42271 (8.8) LiteLLM: RCE via MCP test endpoint command injection (same package: litellm)