CVE-2025-45809: LiteLLM: SQL injection in key management API
Severity: MEDIUM | PoC available | CISA SSVC decision: Track*

LiteLLM is a widely-deployed LLM proxy that centralizes API keys for OpenAI, Anthropic, and other providers — making its database a high-value target. This SQL injection in the key block/unblock endpoints could allow an attacker to extract stored provider API keys, enabling unauthorized LLM usage billed to your organization. Upgrade to 1.81.0+ immediately and restrict these endpoints to trusted networks as a compensating control.
Risk Assessment
CVSS 5.4 understates operational risk. While User Interaction Required limits automated exploitation, the target database contains LLM provider API keys (OpenAI, Anthropic, Azure, etc.) worth thousands in potential abuse. Organizations running LiteLLM as a shared gateway amplify the blast radius — a single successful SQLi could expose credentials for every downstream AI service. Low attack complexity means exploitation requires no specialized AI/ML knowledge once an admin is socially engineered.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| litellm | pip | < 1.81.0 | 1.81.0 |
If you run any LiteLLM version before 1.81.0, you are affected.
Recommended Action
1. Patch immediately: upgrade LiteLLM to >= 1.81.0.
2. If patching is delayed, restrict the /key/block and /key/unblock endpoints to admin-only networks via firewall or reverse-proxy ACLs.
3. Rotate all LLM provider API keys stored in LiteLLM as a precaution if any exposure window existed.
4. Enable database query logging and audit for anomalous SQL patterns targeting the keys table.
5. Review LiteLLM access logs for unexpected calls to /key/block or /key/unblock from untrusted IPs.
6. Implement WAF rules for SQLi patterns on these endpoints as a defense-in-depth measure.
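The endpoint-restriction mitigation could be implemented as a reverse-proxy ACL. The fragment below is an illustrative nginx sketch, not LiteLLM-provided configuration — the subnet, upstream name, and paths are assumptions to adapt to your deployment:

```nginx
# Deny /key/block and /key/unblock except from an assumed admin subnet.
location ~ ^/key/(block|unblock)$ {
    allow 10.0.0.0/8;                    # example admin network; replace with yours
    deny  all;                           # everyone else gets HTTP 403
    proxy_pass http://litellm_upstream;  # assumed upstream name for LiteLLM
}
```

Equivalent allow/deny rules can be expressed in most reverse proxies or WAFs; the key point is that only the two vulnerable endpoints need restriction, so normal proxy traffic is unaffected.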
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-45809?
LiteLLM is a widely-deployed LLM proxy that centralizes API keys for OpenAI, Anthropic, and other providers — making its database a high-value target. This SQL injection in the key block/unblock endpoints could allow an attacker to extract stored provider API keys, enabling unauthorized LLM usage billed to your organization. Upgrade to 1.81.0+ immediately and restrict these endpoints to trusted networks as a compensating control.
Is CVE-2025-45809 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2025-45809, increasing the risk of exploitation.
How to fix CVE-2025-45809?
1. Patch immediately: upgrade LiteLLM to >= 1.81.0. 2. If patching is delayed, restrict /key/block and /key/unblock endpoints to admin-only networks via firewall or reverse proxy ACLs. 3. Rotate all LLM provider API keys stored in LiteLLM as a precaution if any exposure window existed. 4. Enable database query logging and audit for anomalous SQL patterns targeting the keys table. 5. Review LiteLLM access logs for unexpected calls to /key/block or /key/unblock from untrusted IPs. 6. Implement WAF rules for SQLi patterns on these endpoints as a defense-in-depth measure.
What systems are affected by CVE-2025-45809?
This vulnerability affects the following AI/ML architecture patterns: LLM API gateways, Multi-provider LLM proxy deployments, RAG pipelines, Agent frameworks, Model serving, Multi-tenant AI platforms.
What is the CVSS score for CVE-2025-45809?
CVE-2025-45809 has a CVSS v3.1 base score of 5.4 (MEDIUM). The EPSS exploitation probability is 0.23%.
Technical Details
NVD Description
SQL Injection vulnerability in BerriAI LiteLLM before 1.81.0 allows attackers to execute arbitrary commands via the key parameter to the "/key/block" and "/key/unblock" API endpoints.
Exploitation Scenario
An attacker identifies an organization running LiteLLM as its LLM gateway via job postings or public GitHub repos. They then craft a phishing email to a developer or admin containing a link that, when clicked, triggers a request to /key/block with a SQLi payload in the key parameter (a CSRF-style attack, consistent with the User Interaction Required CVSS flag). The injected SQL extracts API keys from the database — potentially yielding OpenAI, Anthropic, or Azure OpenAI keys with full spend limits. The attacker then uses the exfiltrated keys directly against provider APIs for high-volume LLM queries, generating significant costs for the victim organization and potentially accessing proprietary prompts or conversation histories stored in the LiteLLM backend.
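The root cause of this class of bug is attacker-controlled input spliced into SQL text. LiteLLM's actual data layer differs (the project uses an ORM, and the patched code is not reproduced here), so the following is a generic, minimal sketch using Python's sqlite3 module that contrasts the vulnerable pattern with the parameterized fix:

```python
# Illustrative only: a generic SQLi sketch, NOT LiteLLM's real code or schema.
# The table and function names below are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE verification_tokens (token TEXT, blocked INTEGER)")
conn.execute("INSERT INTO verification_tokens VALUES ('sk-good', 0)")

def block_key_vulnerable(key: str) -> None:
    # VULNERABLE: `key` is interpolated into the SQL string, so a payload
    # like "x' OR '1'='1" rewrites the WHERE clause and matches every row.
    conn.execute(
        f"UPDATE verification_tokens SET blocked = 1 WHERE token = '{key}'"
    )

def block_key_safe(key: str) -> None:
    # SAFE: the driver binds `key` as data via a placeholder, never as SQL.
    conn.execute(
        "UPDATE verification_tokens SET blocked = 1 WHERE token = ?", (key,)
    )

payload = "x' OR '1'='1"

block_key_vulnerable(payload)
print(conn.execute("SELECT blocked FROM verification_tokens").fetchone()[0])  # 1

conn.execute("UPDATE verification_tokens SET blocked = 0")
block_key_safe(payload)  # payload is treated as a literal token: matches nothing
print(conn.execute("SELECT blocked FROM verification_tokens").fetchone()[0])  # 0
```

The same principle applies regardless of database or ORM: never build the key lookup by string formatting, always bind it as a parameter.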
Weaknesses (CWE)
- CWE-89: Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection')
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:L/I:L/A:N

References
- github.com/shadia0/Patienc/blob/main/litellm/SQL_injection.md (Exploit, Mitigation, Third Party)
- huntr.com/bounties/3e6e4d40-b06a-4f54-a3ed-cc93584b12f3
Related Vulnerabilities
All in the same package (litellm):
- CVE-2026-42208 (CVSS 9.8) LiteLLM: SQL injection exposes LLM API credentials
- CVE-2026-35030 (CVSS 9.1) LiteLLM: auth bypass via JWT cache key collision
- CVE-2024-6825 (CVSS 8.8) LiteLLM: RCE via post_call_rules callback injection
- CVE-2026-40217 (CVSS 8.8) LiteLLM: RCE via bytecode rewriting in guardrails API
- CVE-2026-42271 (CVSS 8.8) LiteLLM: RCE via MCP test endpoint command injection