CVE-2025-0330: LiteLLM: Langfuse API key leak via error handling
GHSA-879v-fggm-vxw2 · HIGH · CISA SSVC: Track*

The LiteLLM proxy leaks Langfuse API credentials (secret and public keys) in error responses when team settings fail to parse; no authentication is required to trigger the error. Any deployment of LiteLLM <= 1.52.1 with the Langfuse integration is exposed: an attacker gains full access to your Langfuse project, including every LLM prompt and response ever logged. Upgrade immediately, rotate all Langfuse API keys, and treat exposed keys as fully compromised.
Risk Assessment
High risk for organizations running LiteLLM as an LLM gateway with Langfuse observability. CVSS 7.5 with network-accessible, zero-auth, zero-interaction vector makes this trivially exploitable. EPSS (0.00133) suggests limited active exploitation at time of disclosure, but LiteLLM is a widely deployed enterprise AI proxy, amplifying blast radius. The leaked credentials grant persistent read/write access to Langfuse — which stores the full history of LLM requests, potentially including PII, proprietary prompts, and internal tool outputs.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| litellm | pip | <= 1.52.1 | No patch |
Run litellm with Langfuse logging enabled? You're affected.
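To triage quickly, compare each deployment's installed version against the advisory range. A minimal sketch using only the standard library; the helper names are illustrative, and the <= 1.52.1 cutoff comes from the table above:

```python
from importlib.metadata import version, PackageNotFoundError

VULNERABLE_MAX = (1, 52, 1)  # advisory range: litellm <= 1.52.1

def parse_version(v: str) -> tuple:
    # Keep only the leading numeric release segment
    # (e.g. "1.52.1.post1" -> (1, 52, 1)).
    parts = []
    for piece in v.split("."):
        if piece.isdigit():
            parts.append(int(piece))
        else:
            break
    return tuple(parts)

def is_affected(v: str) -> bool:
    return parse_version(v) <= VULNERABLE_MAX

def check_installed() -> str:
    try:
        v = version("litellm")
    except PackageNotFoundError:
        return "litellm is not installed"
    status = "AFFECTED" if is_affected(v) else "not in the advisory range"
    return f"litellm {v}: {status}"
```

Note this only checks the package version; exposure additionally requires the Langfuse integration to be configured.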
Recommended Action
1. PATCH: Upgrade LiteLLM beyond v1.52.1; verify the fix is included in the release notes before deploying.
2. ROTATE: Immediately rotate langfuse_secret and langfuse_public_key in all environments; assume any key configured in an affected version is compromised.
3. AUDIT: Review LiteLLM error logs and application logs for instances where team settings parsing failed; correlate timestamps with unauthorized Langfuse API activity.
4. SCOPE: Inventory all LiteLLM deployments across environments (dev/staging/prod) and apply remediation consistently.
5. DETECT: Add alerting on Langfuse API key usage from unexpected IPs or at unusual times as a compensating control.
6. HARDEN: Ensure LiteLLM proxy error responses are not surfaced to end users or logged to external systems verbatim.
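The HARDEN step can be backstopped in any error-handling or logging path you control. A minimal sketch, assuming only that Langfuse public and secret keys carry the pk-lf- and sk-lf- prefixes; the function name and placeholder string are illustrative:

```python
import re

# Langfuse keys are prefixed pk-lf- (public) and sk-lf- (secret).
LANGFUSE_KEY_RE = re.compile(r"\b[ps]k-lf-[A-Za-z0-9-]+")

def redact_error(message: str) -> str:
    """Replace anything resembling a Langfuse API key with a placeholder
    before the message leaves the proxy or reaches an external log sink."""
    return LANGFUSE_KEY_RE.sub("[REDACTED_LANGFUSE_KEY]", message)
```

For example, `redact_error("team settings parse failed: sk-lf-abc123")` yields `"team settings parse failed: [REDACTED_LANGFUSE_KEY]"`. Redaction is a compensating control, not a substitute for patching and rotation.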
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Classification: Track*
Frequently Asked Questions
What is CVE-2025-0330?
CVE-2025-0330 is a credential-leak vulnerability in the LiteLLM proxy (versions <= 1.52.1): when team settings fail to parse, the error response exposes the configured Langfuse API keys (langfuse_secret and langfuse_public_key) to unauthenticated requesters, granting full access to the associated Langfuse project and all LLM requests it has logged. Upgrade immediately and rotate all Langfuse API keys; treat exposed keys as fully compromised.
Is CVE-2025-0330 actively exploited?
No confirmed active exploitation of CVE-2025-0330 has been reported, but organizations should still patch proactively.
How to fix CVE-2025-0330?
Six steps: (1) PATCH: upgrade LiteLLM beyond v1.52.1, verifying the fix appears in the release notes before deploying; (2) ROTATE: immediately rotate langfuse_secret and langfuse_public_key in all environments, assuming any key configured in an affected version is compromised; (3) AUDIT: review error and application logs for team-settings parse failures and correlate with unauthorized Langfuse API activity; (4) SCOPE: inventory all LiteLLM deployments (dev/staging/prod) and remediate consistently; (5) DETECT: alert on Langfuse API key usage from unexpected IPs or at unusual times; (6) HARDEN: never surface proxy error responses verbatim to end users or external logging systems.
What systems are affected by CVE-2025-0330?
This vulnerability affects the following AI/ML architecture patterns: LLM proxy and gateway deployments, AI observability and tracing pipelines, Multi-tenant LLM infrastructure, LLM inference infrastructure.
What is the CVSS score for CVE-2025-0330?
CVE-2025-0330 has a CVSS v3.0 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.13% (0.00133).
Technical Details
NVD Description
In berriai/litellm version v1.52.1, an issue in proxy_server.py causes the leakage of Langfuse API keys when an error occurs while parsing team settings. This vulnerability exposes sensitive information, including langfuse_secret and langfuse_public_key, which can provide full access to the Langfuse project storing all requests.
Exploitation Scenario
An adversary identifies a target organization running LiteLLM as their centralized AI gateway (e.g., via job postings mentioning LiteLLM, or open proxy endpoints). They craft an HTTP request to the LiteLLM proxy that triggers a team settings parsing error — this requires no credentials and can be done remotely. The resulting error response or log entry contains the raw langfuse_secret and langfuse_public_key in plaintext. The attacker uses these keys to authenticate directly to the Langfuse API, downloading the organization's full LLM interaction history: internal prompts revealing business logic, customer PII, RAG query content, and tool call parameters. With write access, they could also inject false traces to manipulate monitoring dashboards or corrupt evaluation baselines used for model governance.
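The audit described in the remediation steps can be partly automated: scan retained proxy logs and captured error responses for Langfuse-shaped keys to establish whether a leak actually reached an attacker-visible surface. A hedged sketch; the key pattern assumes Langfuse's pk-lf-/sk-lf- prefix convention, and the log format is a placeholder:

```python
import re
from collections.abc import Iterable

# Langfuse keys are prefixed pk-lf- (public) and sk-lf- (secret).
LANGFUSE_KEY_RE = re.compile(r"\b[ps]k-lf-[A-Za-z0-9-]+")

def find_leaked_keys(log_lines: Iterable[str]) -> list[tuple[int, str]]:
    """Return (line_number, key) pairs for every Langfuse-shaped key
    found in the given log lines, for correlation with Langfuse API activity."""
    hits = []
    for n, line in enumerate(log_lines, start=1):
        for key in LANGFUSE_KEY_RE.findall(line):
            hits.append((n, key))
    return hits
```

Any hit in externally visible output should be treated as confirmed exposure: rotate the key and review Langfuse access from that point forward.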
CVSS Vector
CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
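For readers who want the vector broken out, a small sketch that splits any CVSS v3 vector string into its metric/value pairs (the metric abbreviations are defined by the CVSS v3 specification):

```python
def parse_cvss_vector(vector: str) -> dict[str, str]:
    """Split a CVSS v3 vector string into {metric: value} pairs."""
    prefix, _, metrics = vector.partition("/")
    if not prefix.startswith("CVSS:"):
        raise ValueError(f"not a CVSS vector: {vector!r}")
    return dict(part.split(":", 1) for part in metrics.split("/"))

metrics = parse_cvss_vector("CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N")
# AV:N = network attack vector, PR:N = no privileges required,
# UI:N = no user interaction, C:H = high confidentiality impact,
# I:N and A:N = no integrity or availability impact.
```

The breakdown shows why the score is 7.5: a full confidentiality loss (C:H) that is remotely reachable with no privileges or interaction, but with no direct integrity or availability impact.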
Related Vulnerabilities
- CVE-2026-42208 (9.8) LiteLLM: SQL injection exposes LLM API credentials (same package: litellm)
- CVE-2026-35030 (9.1) LiteLLM: auth bypass via JWT cache key collision (same package: litellm)
- CVE-2024-6825 (8.8) LiteLLM: RCE via post_call_rules callback injection (same package: litellm)
- CVE-2026-40217 (8.8) LiteLLM: RCE via bytecode rewriting in guardrails API (same package: litellm)
- CVE-2026-42271 (8.8) LiteLLM: RCE via MCP test endpoint command injection (same package: litellm)