CVE-2026-4993

LOW

OpenUI: hard-coded LiteLLM master key credential leak

Published March 28, 2026
CISO Take

wandb OpenUI hard-codes the LITELLM_MASTER_KEY in config.py, giving any local user with read access to the file full control over the LiteLLM proxy — and by extension, every LLM backend it fronts (OpenAI, Anthropic, Azure, etc.). Rotate your LiteLLM master key immediately and ensure it is injected via environment variable or secrets manager, not baked into source. Audit your container and CI images for this file if OpenUI is part of any shared dev or MLOps environment.

Severity & Risk

CVSS 3.1
3.3 / 10
EPSS
N/A
KEV Status
Not in KEV
Sophistication
Trivial

Recommended Action

  1. Immediately rotate the LITELLM_MASTER_KEY on all affected deployments and revoke any derived virtual keys.
  2. Remove hard-coded values from config.py; inject the key via environment variable (LITELLM_MASTER_KEY) or a secrets manager (Vault, AWS Secrets Manager, Doppler).
  3. Audit container images and CI/CD pipelines that bake in OpenUI — rebuild if config.py was included in any layer.
  4. Enable LiteLLM spend tracking and audit logs to detect anomalous API usage that may indicate prior exploitation.
  5. Pin to a patched version of OpenUI once available; monitor the upstream repo for a fix, given the vendor did not respond to disclosure.
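Step 2 above — injecting the key from the environment rather than hard-coding it — can be sketched as follows. This is a minimal illustration, not OpenUI's actual config.py; the function name and error message are ours.

```python
import os

def get_litellm_master_key() -> str:
    """Load the LiteLLM master key from the environment.

    Fails fast if the key is missing, so a deployment never silently
    falls back to a baked-in default credential.
    """
    key = os.environ.get("LITELLM_MASTER_KEY")
    if not key:
        raise RuntimeError(
            "LITELLM_MASTER_KEY is not set; inject it via an environment "
            "variable or secrets manager — never commit it to source."
        )
    return key
```

The same pattern applies to any backend credential (OpenAI, Anthropic, Azure keys) the proxy fronts: the process environment or a secrets-manager SDK is the source of truth, and source control never sees the value.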

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.2.3 - AI system security controls
NIST AI RMF
MANAGE 2.2 - Mechanisms to sustain risk management
OWASP LLM Top 10
LLM05 - Supply Chain Vulnerabilities
LLM06 - Sensitive Information Disclosure

Technical Details

NVD Description

A vulnerability has been found in wandb OpenUI up to 0.0.0.0/1.0. This impacts an unknown function of the file backend/openui/config.py. The manipulation of the argument LITELLM_MASTER_KEY leads to hard-coded credentials. An attack has to be approached locally. The exploit has been disclosed to the public and may be used. The vendor was contacted early about this disclosure but did not respond in any way.

Exploitation Scenario

An attacker with low-privileged local access to a shared ML development server reads /app/backend/openui/config.py (world-readable in many container deployments). They extract the LITELLM_MASTER_KEY value and use it to authenticate directly to the LiteLLM proxy API, bypassing all virtual-key spend limits. They then issue high-volume requests to GPT-4 or Claude endpoints (cost harvesting), exfiltrate conversation history from other users' sessions, or create their own admin virtual keys for persistent access — all while appearing to originate from the legitimate LiteLLM proxy.
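The audit step in the recommended actions can be automated with a simple source scan. The sketch below flags secret-looking variables assigned non-empty string literals; the variable names and regex are illustrative assumptions, not an exhaustive secret-detection rule (tools like trufflehog or gitleaks do this properly).

```python
import re

# Illustrative pattern: match lines like LITELLM_MASTER_KEY = "sk-..."
# The negative lookahead skips empty-string assignments such as KEY = "".
SECRET_PATTERN = re.compile(
    r'^\s*(LITELLM_MASTER_KEY|OPENAI_API_KEY|ANTHROPIC_API_KEY)'
    r'\s*=\s*["\'](?!["\'])',
    re.MULTILINE,
)

def find_hardcoded_secrets(source: str) -> list[str]:
    """Return the names of secret-looking variables assigned string literals."""
    return SECRET_PATTERN.findall(source)
```

Running such a scan against every config.py in container layers and CI caches identifies images that need rebuilding after the key is rotated.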

CVSS Vector

CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:N
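The vector encodes why the score is low despite the broad blast radius: local attack vector (AV:L), low privileges required (PR:L), and a confidentiality-only impact (C:L). A small parser — a sketch of the standard vector format, not an official CVSS library — makes the metrics explicit:

```python
def parse_cvss_vector(vector: str) -> dict[str, str]:
    """Split a CVSS v3.x vector string into a metric -> value mapping."""
    prefix, *metrics = vector.split("/")
    if not prefix.startswith("CVSS:"):
        raise ValueError(f"not a CVSS vector: {vector!r}")
    return dict(m.split(":", 1) for m in metrics)

metrics = parse_cvss_vector("CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:N")
# metrics["AV"] is "L" (local), metrics["C"] is "L" (low confidentiality impact)
```

Note that CVSS scores the credential file disclosure itself; the downstream consequences (proxy takeover, backend abuse) are what the CISO Take and exploitation scenario describe.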

Timeline

Published
March 28, 2026
Last Modified
March 28, 2026
First Seen
March 28, 2026