CVE-2026-4993: OpenUI: hard-coded LiteLLM master key credential leak
Severity: Low · CISA SSVC: Track
wandb OpenUI hard-codes the LITELLM_MASTER_KEY in config.py, giving any local user with read access to the file full control over the LiteLLM proxy — and by extension, every LLM backend it fronts (OpenAI, Anthropic, Azure, etc.). Rotate your LiteLLM master key immediately and ensure it is injected via an environment variable or secrets manager, not baked into source. Audit your container and CI images for this file if OpenUI is part of any shared dev or MLOps environment.
What is the risk?
CVSS 3.3 (Low) reflects the local-only attack vector, but the contextual risk for AI teams is higher than the score suggests. LiteLLM master keys grant administrative access to all proxied LLM endpoints — meaning cost harvesting, model abuse, and data exfiltration through the proxy are all unlocked with a single credential theft. In containerized MLOps pipelines or on shared dev servers, 'local access' is a low bar. No active exploitation has been reported, but exploit details are public in a GitHub Gist.
What should I do?
5 steps:
1. Immediately rotate the LITELLM_MASTER_KEY on all affected deployments and revoke any derived virtual keys.
2. Remove hard-coded values from config.py; inject the key via an environment variable (LITELLM_MASTER_KEY) or a secrets manager (Vault, AWS Secrets Manager, Doppler); see the config sketch after this list.
3. Audit container images and CI/CD pipelines that bake in OpenUI — rebuild if config.py was included in any layer; the scan script below can help.
4. Enable LiteLLM spend tracking and audit logs to detect anomalous API usage that may indicate prior exploitation, as in the spend-log sketch below.
5. Pin to a patched version of OpenUI once one is available; monitor the upstream repo for a fix, given the vendor did not respond to disclosure.
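Step 2 in practice: a minimal sketch of an environment-driven config module, assuming a plain module-level settings file; the actual structure of OpenUI's backend/openui/config.py may differ.

```python
# config.py (illustrative sketch only, not OpenUI's actual file)
import os

# Read the master key from the environment and fail fast if it is absent,
# rather than falling back to a baked-in default value.
LITELLM_MASTER_KEY = os.environ.get("LITELLM_MASTER_KEY")
if not LITELLM_MASTER_KEY:
    raise RuntimeError(
        "LITELLM_MASTER_KEY is not set; inject it via an environment "
        "variable or a secrets manager instead of hard-coding it."
    )
```

With this pattern, rotating the key becomes a deployment-time change (update the secret, restart the service) and never touches source control.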
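Step 3 in practice: a hypothetical helper for scanning a checked-out repo or an exported container filesystem for the hard-coded pattern. The regex is an assumption (LiteLLM keys conventionally start with "sk-") and should be tuned to your environment.

```python
#!/usr/bin/env python3
"""Scan a directory tree for hard-coded LiteLLM master keys (hypothetical helper)."""
import re
import sys
from pathlib import Path

# Assumes keys follow LiteLLM's conventional "sk-" prefix; adjust as needed.
KEY_PATTERN = re.compile(r"""LITELLM_MASTER_KEY\s*=\s*["']sk-[^"']+["']""")

def scan(root: Path) -> int:
    hits = 0
    for path in root.rglob("*.py"):
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue
        for lineno, line in enumerate(lines, start=1):
            if KEY_PATTERN.search(line):
                print(f"{path}:{lineno}: possible hard-coded master key")
                hits += 1
    return hits

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    # Nonzero exit on a hit so the script can gate a CI pipeline.
    sys.exit(1 if scan(root) else 0)
```

To audit an image rather than a repo, export its filesystem first (for example, create a container from the image and export it to a tarball), extract it, and point the script at the extracted tree.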
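Step 4 in practice: a sketch of pulling recent spend logs from the proxy to look for anomalous volume, assuming a LiteLLM proxy with spend tracking enabled and its /spend/logs endpoint exposed; the endpoint path and response shape may vary by LiteLLM version.

```python
# Sketch: review recent proxy spend for signs of prior abuse.
# Assumes a LiteLLM proxy at http://localhost:4000 with spend tracking enabled;
# the /spend/logs endpoint and its response shape may differ across versions.
import os
import requests

PROXY_URL = os.environ.get("LITELLM_PROXY_URL", "http://localhost:4000")
MASTER_KEY = os.environ["LITELLM_MASTER_KEY"]  # the freshly rotated key

resp = requests.get(
    f"{PROXY_URL}/spend/logs",
    headers={"Authorization": f"Bearer {MASTER_KEY}"},
    timeout=30,
)
resp.raise_for_status()

# Flag unusually expensive requests for manual review.
for entry in resp.json():
    if entry.get("spend", 0) > 1.0:  # threshold is an arbitrary example
        print(entry.get("request_id"), entry.get("model"), entry.get("spend"))
```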
CISA SSVC Assessment
Decision: Track. Source: CISA Vulnrichment (SSVC v2.0); the decision is based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2026-4993?
wandb OpenUI hard-codes the LITELLM_MASTER_KEY in config.py, giving any local user with read access to the file full control over the LiteLLM proxy — and by extension, every LLM backend it fronts (OpenAI, Anthropic, Azure, etc.). Rotate your LiteLLM master key immediately and ensure it is injected via an environment variable or secrets manager, not baked into source. Audit your container and CI images for this file if OpenUI is part of any shared dev or MLOps environment.
Is CVE-2026-4993 actively exploited?
No confirmed active exploitation of CVE-2026-4993 has been reported, but organizations should still remediate proactively.
How to fix CVE-2026-4993?
1. Immediately rotate the LITELLM_MASTER_KEY on all affected deployments and revoke any derived virtual keys. 2. Remove hard-coded values from config.py; inject the key via an environment variable (LITELLM_MASTER_KEY) or a secrets manager (Vault, AWS Secrets Manager, Doppler). 3. Audit container images and CI/CD pipelines that bake in OpenUI; rebuild if config.py was included in any layer. 4. Enable LiteLLM spend tracking and audit logs to detect anomalous API usage that may indicate prior exploitation. 5. Pin to a patched version of OpenUI once one is available; monitor the upstream repo for a fix, given the vendor did not respond to disclosure.
What systems are affected by CVE-2026-4993?
This vulnerability affects the following AI/ML architecture patterns: LLM inference proxies, AI development environments, MLOps pipelines, multi-provider LLM routing.
What is the CVSS score for CVE-2026-4993?
CVE-2026-4993 has a CVSS v3.1 base score of 3.3 (LOW). The EPSS exploitation probability is 0.01%.
Technical Details
NVD Description
A vulnerability has been found in wandb OpenUI up to 0.0.0.0/1.0. This impacts an unknown function of the file backend/openui/config.py. The manipulation of the argument LITELLM_MASTER_KEY leads to hard-coded credentials. An attack has to be approached locally. The exploit has been disclosed to the public and may be used. The vendor was contacted early about this disclosure but did not respond in any way.
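For orientation, the vulnerable pattern is a module-level assignment along the lines of the hypothetical sketch below; the actual key value and surrounding code in backend/openui/config.py are not reproduced here.

```python
# Hypothetical reconstruction of the flawed pattern in backend/openui/config.py.
# The real file's key value and structure are not reproduced here.
LITELLM_MASTER_KEY = "sk-example-hardcoded-key"  # readable by any local user with file access
```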
Exploitation Scenario
An attacker with low-privileged local access to a shared ML development server reads /app/backend/openui/config.py (world-readable in many container deployments). They extract the LITELLM_MASTER_KEY value and use it to authenticate directly to the LiteLLM proxy API, bypassing all virtual-key spend limits. They then issue high-volume requests to GPT-4 or Claude endpoints (cost harvesting), exfiltrate conversation history from other users' sessions, or create their own admin virtual keys for persistent access — all while appearing to originate from the legitimate LiteLLM proxy.
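A minimal sketch of that scenario, assuming a LiteLLM proxy reachable at http://localhost:4000; the paths follow LiteLLM's documented proxy API (OpenAI-compatible model listing plus key management) but may vary by version.

```python
# Illustrative only: what a leaked master key allows against a LiteLLM proxy.
# Assumes the proxy listens at http://localhost:4000; adjust for your deployment.
import requests

MASTER_KEY = "sk-example-hardcoded-key"  # value lifted from config.py
HEADERS = {"Authorization": f"Bearer {MASTER_KEY}"}
PROXY_URL = "http://localhost:4000"

# Enumerate every backend model the proxy fronts.
models = requests.get(f"{PROXY_URL}/v1/models", headers=HEADERS, timeout=30).json()
print([m["id"] for m in models.get("data", [])])

# Mint a fresh virtual key for persistent access, bypassing per-user spend limits.
new_key = requests.post(
    f"{PROXY_URL}/key/generate",
    headers=HEADERS,
    json={"duration": "30d"},
    timeout=30,
).json()
print(new_key.get("key"))
```

Because a stolen master key can mint additional credentials like this, rotation alone may not be enough; revoke any derived virtual keys as well (step 1 above).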
Weaknesses (CWE)
CWE-798: Use of Hard-coded Credentials
CVSS Vector
CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:N
Related Vulnerabilities
- CVE-2026-21858 (CVSS 10.0), n8n: Input Validation flaw enables exploitation. Same attack type: Data Extraction.
- CVE-2025-53767 (CVSS 10.0), Azure OpenAI: SSRF EoP, no auth required. Same attack type: Data Extraction.
- CVE-2023-3765 (CVSS 10.0), MLflow: path traversal allows arbitrary file read. Same attack type: Data Extraction.
- CVE-2025-2828 (CVSS 10.0), LangChain RequestsToolkit: SSRF exposes cloud metadata. Same attack type: Data Extraction.
- GHSA-vvpj-8cmc-gx39 (CVSS 10.0), picklescan: security flaw enables exploitation. Same attack type: Auth Bypass.