GHSA-69x8-hrgq-fjj8: LiteLLM: auth bypass chain enables full privilege escalation
GHSA-69x8-hrgq-fjj8 · Severity: HIGH

LiteLLM contains a three-step authentication bypass chain requiring only a valid low-privilege account: unsalted SHA-256 password hashes are exposed directly in API responses from /user/info, /user/update, and /spend/users, and the /v2/login endpoint accepts those raw hashes as valid credentials, allowing any authenticated user to hijack any account, including admin, in three HTTP requests. With 2,002 downstream dependents, a package risk score of 79/100, and 12 prior CVEs in the same package, LiteLLM is a high-value target in enterprise AI stacks where it proxies credentials for OpenAI, Anthropic, Azure OpenAI, and other LLM providers. Exploitation requires no specialized skill beyond basic API knowledge and valid credentials at any privilege level. Upgrade to v1.83.0 immediately, then rotate all LiteLLM admin passwords and any LLM provider API keys configured in the instance.
Risk Assessment
High. The attack requires only authenticated access at any privilege level — a common starting point for insider threats, compromised developer accounts, or attackers who phished a low-privilege user. The exploitation chain is trivially mechanical: three sequential HTTP requests with no AI/ML expertise required. The blast radius is severe for organizations using LiteLLM as their LLM gateway, since admin access exposes all configured provider API keys, spending controls, and model routing. No public exploit confirmed and not in CISA KEV, but the technique is simple enough to rediscover independently. OpenSSF score of 5.9/10 and 12 prior CVEs suggest a pattern of insufficient security investment in this package.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| litellm | pip | < 1.83.0 | 1.83.0 |
Recommended Action
- Upgrade LiteLLM to v1.83.0 immediately — scrypt migration is transparent on next login.
- Force-reset all user passwords post-upgrade to invalidate any previously exposed SHA-256 hashes.
- Rotate all LLM provider API keys (OpenAI, Anthropic, Azure, etc.) configured in LiteLLM as a precaution.
- Review access logs for calls to /user/info, /user/update, and /spend/users from non-admin accounts prior to patching — these are the hash exfiltration vectors.
- If immediate upgrade is blocked, restrict those three endpoints at the reverse proxy or WAF level to admin-source IPs only.
- Enable alerting on cross-user authentication patterns (user A authenticating after querying user B's profile).
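The log-review step above can be scripted. Below is a minimal sketch that flags hits on the three hash-exfiltration endpoints; the combined access-log format (as emitted by an nginx or Apache reverse proxy in front of LiteLLM) is an assumption — adjust the regex to your proxy's log layout. Only the endpoint list comes from this advisory.

```python
import re
import sys
from collections import Counter

# Endpoints that leaked password hashes before v1.83.0 (per this advisory).
LEAK_ENDPOINTS = {"/user/info", "/user/update", "/spend/users"}

# Assumes the "combined" access-log format of a reverse proxy in front of
# LiteLLM; adjust if your proxy logs differently.
LOG_LINE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+)')

def suspicious_requests(lines):
    """Yield (ip, method, path) for every hit on a hash-exposing endpoint."""
    for line in lines:
        m = LOG_LINE.match(line)
        if not m:
            continue
        path = m.group("path").split("?", 1)[0]  # drop any query string
        if path in LEAK_ENDPOINTS:
            yield m.group("ip"), m.group("method"), path

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        hits = list(suspicious_requests(f))
    for ip, count in Counter(ip for ip, _, _ in hits).most_common():
        print(f"{ip}\t{count} request(s) to hash-exposing endpoints")
```

Any non-admin source IP surfaced by this scan before the patch date should be treated as a potential hash-exfiltration event and trigger the password reset and key rotation steps above.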
Technical Details
NVD Description
### Impact

Three issues combine into a full authentication bypass chain:

1. Weak hashing: User passwords are stored as unsalted SHA-256 hashes, making them vulnerable to rainbow-table attacks and trivially revealing users who share the same password.
2. Hash exposure: Multiple API endpoints (/user/info, /user/update, /spend/users) return the password hash field in responses to any authenticated user, regardless of role. Plaintext passwords could also potentially be exposed in certain scenarios.
3. Pass-the-hash: The /v2/login endpoint accepts the raw SHA-256 hash as a valid password without re-hashing, allowing direct login with a stolen hash.

An already authenticated user can retrieve another user's password hash from the API and use it to log in as that user. This enables full privilege escalation in three HTTP requests.

### Patches

Fixed in v1.83.0. Passwords are now hashed with scrypt (random 16-byte salt, n=16384, r=8, p=1). Password hashes are stripped from all API responses. Existing SHA-256 hashes are transparently migrated on next login.
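The scrypt parameters above map directly onto Python's standard library. The sketch below shows a salted hash-and-verify pair using exactly those parameters; the storage format (hex salt and key, colon-separated) is an illustrative assumption, not necessarily how LiteLLM stores the result internally.

```python
import hashlib
import hmac
import os

# Parameters stated in the v1.83.0 patch notes: random 16-byte salt,
# n=16384, r=8, p=1. KEY_LEN is an illustrative choice.
N, R, P, SALT_LEN, KEY_LEN = 16384, 8, 1, 16, 32

def hash_password(password: str) -> str:
    """Hash with a fresh random salt; identical passwords yield different hashes."""
    salt = os.urandom(SALT_LEN)
    key = hashlib.scrypt(password.encode(), salt=salt, n=N, r=R, p=P, dklen=KEY_LEN)
    return salt.hex() + ":" + key.hex()

def verify_password(password: str, stored: str) -> bool:
    """Re-derive the key with the stored salt and compare in constant time."""
    salt_hex, key_hex = stored.split(":")
    key = hashlib.scrypt(password.encode(), salt=bytes.fromhex(salt_hex),
                         n=N, r=R, p=P, dklen=KEY_LEN)
    return hmac.compare_digest(key.hex(), key_hex)
```

Because the salt is random per user, the same password produces a different stored value every time, which closes the rainbow-table and duplicate-password weaknesses of unsalted SHA-256.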
Exploitation Scenario
An attacker with any valid LiteLLM account — e.g., a developer added to the instance — calls GET /spend/users or GET /user/info, which return the SHA-256 password hash for all users including admins in the JSON response. The attacker extracts the admin hash and POSTs it directly to /v2/login as the password field, receiving a valid admin session token. With admin access they dump all configured LLM provider API keys, redirect model routing to log all inference traffic to an external server, or create backdoor admin accounts. Total time from low-priv access to full compromise: under 60 seconds with a basic HTTP client.
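The chain can be illustrated with a self-contained model of the flawed logic. This is not LiteLLM's code: the user records, field names, and the exact login comparison are assumptions chosen to reproduce the behavior the advisory describes (hash exposure to any caller, and a stored digest accepted as a credential).

```python
import hashlib

# In-memory stand-in for the pre-1.83.0 user store (illustrative only).
users = {
    "admin": {"role": "admin",
              "password_hash": hashlib.sha256(b"hunter2").hexdigest()},
    "dev":   {"role": "internal_user",
              "password_hash": hashlib.sha256(b"devpass").hexdigest()},
}

def user_info(_caller: str) -> dict:
    # Flaw 2: the hash field is returned to ANY authenticated caller,
    # regardless of the caller's role.
    return users

def login(username: str, password: str) -> bool:
    # Flaw 3: the stored digest itself is accepted as a credential. The exact
    # comparison in LiteLLM is not public; this "or" captures the net effect.
    stored = users[username]["password_hash"]
    return hashlib.sha256(password.encode()).hexdigest() == stored or password == stored

# The three-step chain: low-privilege "dev" reads the admin hash, then
# replays it as the password. No knowledge of "hunter2" is needed.
leaked = user_info("dev")["admin"]["password_hash"]
assert login("admin", leaked)
```

With per-user salting and hash stripping (the v1.83.0 fixes), both the `user_info` leak and the pass-the-hash replay disappear.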
Related Vulnerabilities
| CVE | Score | Summary | Relation |
|---|---|---|---|
| CVE-2026-35030 | 9.1 | LiteLLM: auth bypass via JWT cache key collision | Same package: litellm |
| CVE-2024-6825 | 8.8 | LiteLLM: RCE via post_call_rules callback injection | Same package: litellm |
| CVE-2025-0628 | 8.1 | litellm: privilege escalation viewer→proxy admin via bad API key | Same package: litellm |
| CVE-2024-4888 | 8.1 | litellm: arbitrary file deletion via audio endpoint | Same package: litellm |
| CVE-2024-8984 | 7.5 | litellm: unauthenticated DoS via multipart boundary parsing | Same package: litellm |