If your organization uses LiteLLM as an LLM gateway or proxy, assume any API keys (OpenAI, Anthropic, Azure, etc.) processed by versions before 1.44.12 are compromised and rotate them immediately. The defective masking means near-complete keys appear in plaintext in any log aggregator, SIEM, or cloud logging service with access to LiteLLM logs. Patch to 1.44.12 and audit who has had read access to application logs.
Risk Assessment
Despite a low EPSS score (0.00109), the business risk is disproportionately high for AI-heavy organizations. LiteLLM is a de facto standard LLM gateway, and API keys logged in near-plaintext are accessible to anyone with log read permissions — a very broad attack surface in most enterprise environments (SIEM operators, DevOps teams, cloud logging services). A compromised LLM API key enables financial fraud (unbounded API spend), data exfiltration via the LLM provider, and lateral movement into other services sharing the same key. The attack is trivial once log access is obtained.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| litellm | pip | < 1.44.12 | 1.44.12 |
If you run any version of litellm below 1.44.12, you are affected.
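As a quick triage aid, the version check from the table above can be scripted. A minimal sketch — the `version_is_vulnerable` helper is illustrative and assumes plain `X.Y.Z` version strings (pre-release suffixes would need extra handling):

```python
from importlib.metadata import PackageNotFoundError, version

def version_is_vulnerable(ver: str) -> bool:
    """True if ver predates the 1.44.12 fix (assumes plain X.Y.Z strings)."""
    parts = tuple(int(p) for p in ver.split(".")[:3])
    return parts < (1, 44, 12)

try:
    installed = version("litellm")
    print(installed, "-> vulnerable" if version_is_vulnerable(installed) else "-> patched")
except PackageNotFoundError:
    print("litellm is not installed in this environment")
```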
Recommended Action
1. IMMEDIATE: Upgrade LiteLLM to >= 1.44.12.
2. IMMEDIATE: Rotate ALL API keys (OpenAI, Anthropic, Azure OpenAI, etc.) that were ever configured in the affected LiteLLM instance — treat them as fully compromised.
3. Audit log access: identify who has accessed application logs since LiteLLM deployment; check for anomalous API usage on provider dashboards.
4. Purge or restrict access to historical logs containing the leaked keys.
5. DETECTION: Search existing logs for patterns matching API key formats (e.g., 'sk-' for OpenAI, 'sk-ant-' for Anthropic) to confirm exposure scope.
6. Implement log scrubbing/DLP controls at the log aggregation layer as defense-in-depth.
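The detection step (5) can be scripted. A minimal sketch, assuming OpenAI-style `sk-` and Anthropic-style `sk-ant-` prefixes — tune the pattern to the providers and key formats you actually use:

```python
import re

# Assumed key shapes: OpenAI keys start with "sk-", Anthropic with "sk-ant-".
KEY_PATTERN = re.compile(r"\bsk-(?:ant-)?[A-Za-z0-9_-]{16,}\b")

def find_exposed_keys(log_lines):
    """Return the log lines that appear to contain an unmasked API key."""
    return [line for line in log_lines if KEY_PATTERN.search(line)]

logs = [
    "2024-09-01 INFO request ok",
    "2024-09-01 DEBUG api_key=sk-abcdef1234567890ABCDEF",
]
print(find_exposed_keys(logs))  # flags the second line only
```

Run the same scan against exported SIEM/log-aggregator data to scope which keys leaked and over what period.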
Frequently Asked Questions
What is CVE-2024-9606?
CVE-2024-9606 is a sensitive-information-exposure vulnerability in berriai/litellm before version 1.44.12: the API key masking routine in `litellm/litellm_core_utils/litellm_logging.py` masks only the first 5 characters of each key, so nearly the entire key is written to logs. Any API key processed by an affected instance should be treated as compromised and rotated.
Is CVE-2024-9606 actively exploited?
No confirmed active exploitation of CVE-2024-9606 has been reported, but organizations should still patch proactively.
How to fix CVE-2024-9606?
1. IMMEDIATE: Upgrade LiteLLM to >= 1.44.12. 2. IMMEDIATE: Rotate ALL API keys (OpenAI, Anthropic, Azure OpenAI, etc.) that were ever configured in the affected LiteLLM instance — treat them as fully compromised. 3. Audit log access: identify who has accessed application logs since LiteLLM deployment; check for anomalous API usage on provider dashboards. 4. Purge or restrict access to historical logs containing the leaked keys. 5. DETECTION: Search existing logs for patterns matching API key formats (e.g., 'sk-' for OpenAI, 'sk-ant-' for Anthropic) to confirm exposure scope. 6. Implement log scrubbing/DLP controls at the log aggregation layer as defense-in-depth.
What systems are affected by CVE-2024-9606?
This vulnerability affects the following AI/ML architecture patterns: LLM API gateways, multi-provider LLM routing, agent frameworks, RAG pipelines, model serving, AI/ML CI/CD pipelines.
What is the CVSS score for CVE-2024-9606?
CVE-2024-9606 has a CVSS v3.0 base score of 7.5 (HIGH). The EPSS exploitation probability is approximately 0.11% (EPSS score 0.00109).
Technical Details
NVD Description
In berriai/litellm before version 1.44.12, the `litellm/litellm_core_utils/litellm_logging.py` file contains a vulnerability where the API key masking code only masks the first 5 characters of the key. This results in the leakage of almost the entire API key in the logs, exposing a significant amount of the secret key. The issue affects version v1.44.9.
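To make the defect concrete, here is a simplified sketch of the flawed masking logic next to a safer variant. This is illustrative only, not the actual litellm code:

```python
def mask_first_five(key: str) -> str:
    # Flawed approach matching the CVE: only the first 5 characters are
    # replaced, so the rest of the secret survives in the logs.
    return "*" * 5 + key[5:]

def mask_all_but_prefix(key: str, visible: int = 4) -> str:
    # Safer approach: keep a short prefix for correlation, hide the rest.
    return key[:visible] + "*" * (len(key) - visible)

key = "sk-abcdefghij1234567890"
print(mask_first_five(key))      # *****cdefghij1234567890  (key mostly recoverable)
print(mask_all_but_prefix(key))  # sk-a*******************
```

With the flawed version, an attacker only needs to guess the 5 hidden characters — and the first three are almost always the provider's fixed `sk-` prefix.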
Exploitation Scenario
An adversary with read access to application logs — via a compromised monitoring account, insider threat, or exposed log aggregator — searches LiteLLM log output for API key patterns. Because only the first 5 characters are masked (e.g., '*****' followed by the remainder of the key in plaintext), the attacker recovers a working key trivially. The attacker then uses the harvested OpenAI or Anthropic API key to exfiltrate data by querying the LLM with internal documents, run unauthorized model inference at the victim's expense, or pivot to other services using the same credential material. In agentic deployments, the compromised key may grant access to tool-calling capabilities with broad permissions.
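As defense-in-depth against this class of leak (remediation step 6), key-shaped tokens can be redacted at the logging layer before any handler persists them. A minimal `logging.Filter` sketch — the key pattern is an assumption and should match the providers you use:

```python
import logging
import re

# Assumed key shapes: OpenAI "sk-", Anthropic "sk-ant-".
KEY_RE = re.compile(r"\bsk-(?:ant-)?[A-Za-z0-9_-]{16,}\b")

class SecretScrubber(logging.Filter):
    """Redact API-key-shaped tokens before records reach any handler."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = KEY_RE.sub("[REDACTED]", record.getMessage())
        record.args = ()  # message is already fully formatted
        return True

logger = logging.getLogger("gateway")
logger.addHandler(logging.StreamHandler())
logger.addFilter(SecretScrubber())
logger.warning("upstream call with key %s failed", "sk-abcdef1234567890ABCDEF")
# logged line contains [REDACTED] instead of the key
```

This is a safety net, not a substitute for patching: it only catches tokens matching the configured pattern.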
Weaknesses (CWE)
CVSS Vector
CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
Related Vulnerabilities
- CVE-2026-42208 (9.8) LiteLLM: SQL injection exposes LLM API credentials (same package: litellm)
- CVE-2026-35030 (9.1) LiteLLM: auth bypass via JWT cache key collision (same package: litellm)
- CVE-2024-6825 (8.8) LiteLLM: RCE via post_call_rules callback injection (same package: litellm)
- CVE-2026-40217 (8.8) LiteLLM: RCE via bytecode rewriting in guardrails API (same package: litellm)
- CVE-2026-42271 (8.8) LiteLLM: RCE via MCP test endpoint command injection (same package: litellm)