CVE-2026-4993: OpenUI: hard-coded LiteLLM master key credential leak

LOW CISA: TRACK*
Published March 28, 2026
CISO Take

wandb OpenUI hard-codes the LITELLM_MASTER_KEY in config.py, giving any local user with read access to the file full control over the LiteLLM proxy — and by extension, every LLM backend it fronts (OpenAI, Anthropic, Azure, etc.). Rotate your LiteLLM master key immediately and ensure it is injected via environment variable or secrets manager, not baked into source. Audit your container and CI images for this file if OpenUI is part of any shared dev or MLOps environment.

What is the risk?

CVSS 3.3 (Low) reflects the local-only attack vector, but the contextual risk for AI teams is higher than the score suggests. A LiteLLM master key grants administrative access to all proxied LLM endpoints, so a single credential theft unlocks cost harvesting, model abuse, and data exfiltration through the proxy. In containerized MLOps pipelines or on shared dev servers, 'local access' is a low bar. No active exploitation has been reported, but exploit details are publicly available in a GitHub Gist.

Severity & Risk

CVSS 3.1: 3.3 / 10
EPSS: 0.01% chance of exploitation in 30 days (higher than 0% of all CVEs)
Exploitation Status: Exploit Available
Exploitation: MEDIUM
Sophistication: Trivial
Exploitation Confidence: Medium
CISA SSVC: Public PoC

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

Attack Vector (AV): Local
Attack Complexity (AC): Low
Privileges Required (PR): Low
User Interaction (UI): None
Scope (S): Unchanged
Confidentiality (C): Low
Integrity (I): None
Availability (A): None

What should I do?

5 steps
  1. Immediately rotate the LITELLM_MASTER_KEY on all affected deployments and revoke any derived virtual keys.

  2. Remove hard-coded values from config.py; inject the key via environment variable (LITELLM_MASTER_KEY env var) or a secrets manager (Vault, AWS Secrets Manager, Doppler).

  3. Audit container images and CI/CD pipelines that bake OpenUI — rebuild if config.py was included in layers.

  4. Enable LiteLLM spend tracking and audit logs to detect anomalous API usage that may indicate prior exploitation.

  5. Pin to a patched version of OpenUI once available; monitor the upstream repo for a fix given the vendor did not respond to disclosure.
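Step 2 above can be sketched in code. The following is a minimal pattern for loading the key at startup; it assumes the key currently lives in a module-level constant (the exact variable layout inside OpenUI's backend/openui/config.py may differ):

```python
import os


def get_litellm_master_key() -> str:
    """Load the LiteLLM master key from the environment, never from source.

    Fails fast if the secret is missing, so a misconfigured deployment is
    caught at startup instead of silently falling back to a baked-in default.
    """
    key = os.environ.get("LITELLM_MASTER_KEY")
    if not key:
        raise RuntimeError(
            "LITELLM_MASTER_KEY is not set; inject it via an environment "
            "variable or a secrets manager (Vault, AWS Secrets Manager, Doppler)."
        )
    return key
```

Deliberately, there is no fallback value: a hard-coded default would recreate the vulnerability, and reading the key at call time means rotation only requires a restart.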

CISA SSVC Assessment

Decision: Track*
Exploitation: poc
Automatable: No
Technical Impact: partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.2.3 - AI system security controls
NIST AI RMF
MANAGE 2.2 - Mechanisms to sustain risk management
OWASP LLM Top 10
LLM05 - Supply Chain Vulnerabilities
LLM06 - Sensitive Information Disclosure

Frequently Asked Questions

What is CVE-2026-4993?

CVE-2026-4993 is a hard-coded credential vulnerability in wandb OpenUI: the LITELLM_MASTER_KEY is embedded in backend/openui/config.py, so any local user who can read that file gains full administrative control over the LiteLLM proxy — and by extension, every LLM backend it fronts (OpenAI, Anthropic, Azure, etc.).

Is CVE-2026-4993 actively exploited?

No confirmed active exploitation of CVE-2026-4993 has been reported, but a public proof of concept exists, so organizations should rotate affected keys and patch proactively.

How to fix CVE-2026-4993?

1. Immediately rotate the LITELLM_MASTER_KEY on all affected deployments and revoke any derived virtual keys. 2. Remove hard-coded values from config.py; inject the key via environment variable (LITELLM_MASTER_KEY env var) or a secrets manager (Vault, AWS Secrets Manager, Doppler). 3. Audit container images and CI/CD pipelines that bake OpenUI — rebuild if config.py was included in layers. 4. Enable LiteLLM spend tracking and audit logs to detect anomalous API usage that may indicate prior exploitation. 5. Pin to a patched version of OpenUI once available; monitor the upstream repo for a fix given the vendor did not respond to disclosure.

What systems are affected by CVE-2026-4993?

This vulnerability affects the following AI/ML architecture patterns: LLM inference proxies, AI development environments, MLOps pipelines, multi-provider LLM routing.

What is the CVSS score for CVE-2026-4993?

CVE-2026-4993 has a CVSS v3.1 base score of 3.3 (LOW). The EPSS exploitation probability is 0.01%.

Technical Details

NVD Description

A vulnerability has been found in wandb OpenUI up to 0.0.0.0/1.0. This impacts an unknown function of the file backend/openui/config.py. The manipulation of the argument LITELLM_MASTER_KEY leads to hard-coded credentials. An attack has to be approached locally. The exploit has been disclosed to the public and may be used. The vendor was contacted early about this disclosure but did not respond in any way.

Exploitation Scenario

An attacker with low-privileged local access to a shared ML development server reads /app/backend/openui/config.py (world-readable in many container deployments). They extract the LITELLM_MASTER_KEY value and use it to authenticate directly to the LiteLLM proxy API, bypassing all virtual-key spend limits. They then issue high-volume requests to GPT-4 or Claude endpoints (cost harvesting), exfiltrate conversation history from other users' sessions, or create their own admin virtual keys for persistent access — all while appearing to originate from the legitimate LiteLLM proxy.
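Defenders can use the same file-read access for auditing. The sketch below scans a source tree or an unpacked container layer for literal LITELLM_MASTER_KEY assignments; the regex is illustrative and will not catch every encoding (base64 blobs, concatenated strings, etc.), so treat a clean result as a signal, not proof:

```python
import re
from pathlib import Path

# Matches literal assignments like: LITELLM_MASTER_KEY = "sk-..."
# Env-var reads such as os.environ["LITELLM_MASTER_KEY"] do NOT match,
# because the name must be followed by '=' and a quoted string.
HARDCODED_KEY = re.compile(r'LITELLM_MASTER_KEY\s*=\s*["\'][^"\']+["\']')


def find_hardcoded_keys(root: str) -> list[str]:
    """Return paths of .py files under `root` containing a literal key assignment."""
    hits = []
    for path in Path(root).rglob("*.py"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # skip unreadable files rather than aborting the scan
        if HARDCODED_KEY.search(text):
            hits.append(str(path))
    return sorted(hits)
```

Running this over /app (or an `docker save`-extracted image filesystem) flags files to remediate per the steps above; files that read the key from the environment are not reported.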

CVSS Vector

CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:N

Timeline

Published
March 28, 2026
Last Modified
April 24, 2026
First Seen
March 28, 2026

Related Vulnerabilities