GHSA-5mg7-485q-xm76: litellm: supply chain attack harvests AI API credentials
Severity: CRITICAL

Two litellm PyPI releases (1.82.7 and 1.82.8) contained auto-activating malware that exfiltrated credentials and files to an attacker-controlled endpoint. Any environment that installed and ran these versions should be treated as fully compromised: rotate all credentials immediately, especially LLM provider API keys (OpenAI, Anthropic, Azure, etc.), cloud credentials, and any secrets accessible to the litellm process. Upgrade to a clean release and audit your Python dependency pipeline for similar exposure.
What is the risk?
CRITICAL. litellm is a widely deployed LLM proxy used across AI/ML pipelines in production environments — it sits at the intersection of LLM API keys, cloud credentials, and potentially sensitive prompt data. The malware auto-activated without user interaction beyond a routine pip install/upgrade. The attack vector is trivially reproducible by anyone with PyPI upload access, and the blast radius includes every downstream AI system with credentials in scope. The supply chain entry point (a stolen API token from a transitive dependency) demonstrates chained third-party risk that traditional SCA tooling would not have caught prior to publication.
What systems are affected?
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| litellm | pip | >= 1.82.7, <= 1.82.8 | No patch |
Only litellm 1.82.7 and 1.82.8 are affected. If any of your environments installed either release, treat them as compromised.
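As a first check, the installed litellm version can be compared against the known-bad releases. A minimal sketch using only the standard library (the function name is illustrative, not part of any official tooling):

```python
from importlib import metadata

# Known-malicious litellm releases from this advisory.
AFFECTED = {"1.82.7", "1.82.8"}

def litellm_exposed(version=None):
    """Return True if the given (or currently installed) litellm version
    is one of the known-malicious releases."""
    if version is None:
        try:
            version = metadata.version("litellm")
        except metadata.PackageNotFoundError:
            return False  # litellm not installed in this environment
    return version in AFFECTED
```

Note this only inspects the interpreter it runs under; each virtualenv, container image, and lock file needs to be checked separately.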
What should I do?
1. IMMEDIATE: Identify all environments with litellm 1.82.7 or 1.82.8 installed, via `pip show litellm` or dependency lock files. Treat them as fully compromised.
2. Rotate all credentials that were accessible to the litellm process: LLM API keys (OpenAI, Anthropic, Azure, Cohere, etc.), cloud provider credentials, database passwords, and any secrets in environment variables.
3. Upgrade litellm to a clean release per the vendor security advisory (docs.litellm.ai/blog/security-update-march-2026).
4. Audit outbound network traffic logs from affected hosts for connections to unknown external IPs/domains during the exposure window (after 2026-03-25).
5. Review CI/CD pipelines and container images that may have cached the malicious package layers.
6. Detection: search pip-audit output, SBOMs, and lock files for the affected version range. Indicators of compromise: a litellm_init.pth file (auto-executes on Python interpreter startup) and anomalous code in litellm/proxy/proxy_server.py near line 130.
7. Enable PyPI package integrity verification and consider a private mirror or allowlist for production AI environments.
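The litellm_init.pth indicator from step 6 can be swept for programmatically. A sketch, assuming the default scan targets are this interpreter's site-packages directories (an empty result means the indicator is absent from that environment, not proof of safety):

```python
import site
from pathlib import Path

# Indicator of compromise: Python exec()s this file at interpreter startup.
IOC = "litellm_init.pth"

def find_ioc(dirs=None):
    """Return paths of any litellm_init.pth found in the given directories
    (defaults to this interpreter's site-packages directories)."""
    if dirs is None:
        dirs = site.getsitepackages() + [site.getusersitepackages()]
    return [p for d in dirs for p in [Path(d) / IOC] if p.is_file()]

if __name__ == "__main__":
    for hit in find_ioc():
        print("COMPROMISED:", hit)
```

Run it once per interpreter and virtualenv; container images should be scanned layer by layer, since the file may be cached in an earlier layer.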
Frequently Asked Questions
What is GHSA-5mg7-485q-xm76?
Two litellm PyPI releases (1.82.7–1.82.8) contained auto-activating malware that exfiltrated credentials and files to an attacker-controlled endpoint. Any environment that installed and ran these versions should be treated as fully compromised — rotate all credentials immediately, especially LLM provider API keys (OpenAI, Anthropic, Azure, etc.), cloud credentials, and any secrets accessible to the litellm process. Upgrade to a clean release and audit your Python dependency pipeline for similar exposure.
Is GHSA-5mg7-485q-xm76 actively exploited?
No confirmed active exploitation of GHSA-5mg7-485q-xm76 has been reported, but organizations should still patch proactively.
How to fix GHSA-5mg7-485q-xm76?
1. IMMEDIATE: Identify all environments with litellm 1.82.7 or 1.82.8 installed via `pip show litellm` or dependency lock files. Treat them as fully compromised.
2. Rotate all credentials that were accessible to the litellm process: LLM API keys (OpenAI, Anthropic, Azure, Cohere, etc.), cloud provider credentials, database passwords, and any secrets in environment variables.
3. Upgrade litellm to a clean release per the vendor security advisory (docs.litellm.ai/blog/security-update-march-2026).
4. Audit outbound network traffic logs from affected hosts for connections to unknown external IPs/domains during the exposure window (after 2026-03-25).
5. Review CI/CD pipelines and container images that may have cached the malicious package layers.
6. Detection: search pip-audit output, SBOMs, and lock files for the affected version range. Indicators of compromise: a litellm_init.pth file (auto-executes on Python interpreter startup) and anomalous code in litellm/proxy/proxy_server.py near line 130.
7. Enable PyPI package integrity verification and consider a private mirror or allowlist for production AI environments.
What systems are affected by GHSA-5mg7-485q-xm76?
This vulnerability affects the following AI/ML architecture patterns: LLM API proxies and routers, Agent frameworks, RAG pipelines, Model serving, CI/CD and MLOps pipelines.
What is the CVSS score for GHSA-5mg7-485q-xm76?
No CVSS score has been assigned yet.
Technical Details
NVD Description
After an API token exposure from an exploited trivy dependency, two new releases of `litellm` were uploaded to PyPI containing automatically activated malware that harvested sensitive credentials and files and exfiltrated them to a remote API. Anyone who has installed and run the project should assume any credentials available to the litellm environment may have been exposed, and revoke/rotate them accordingly.
Exploitation Scenario
An adversary exploits a GitHub Actions workflow or CI credential in a transitive dependency (trivy in this case) to obtain a PyPI API token. They upload two new litellm releases containing malicious code embedded in proxy_server.py and a .pth file (which Python auto-executes at interpreter startup). Any engineer or automated pipeline running 'pip install litellm' or 'pip install --upgrade litellm' in the affected window silently installs the backdoor. On first Python startup, the malware enumerates environment variables, searches for credential files (.aws/credentials, .config, API key files), and POSTs the harvest to an attacker-controlled endpoint — no user interaction, no elevated privileges, no anomalous process required beyond normal litellm operation. In an AI/ML context, this yields LLM API keys enabling the attacker to run inference at the victim's expense, access proprietary prompts/data, or pivot to cloud environments via harvested cloud credentials.
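The auto-execution primitive here is ordinary CPython behavior, not a litellm bug: site.py exec()s any line in a .pth file that starts with `import` when it processes a site directory, which happens automatically at interpreter startup for site-packages. A benign sketch of the mechanism, with a stand-in payload that sets an environment variable instead of harvesting credentials:

```python
import os
import site
import tempfile

# Write a .pth file whose single line begins with "import"; site.py will
# exec() that line when the directory is processed. The malicious
# litellm_init.pth used exactly this hook to bootstrap its harvester with
# no user interaction and no elevated privileges.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "demo_init.pth"), "w") as f:
        f.write('import os; os.environ["PTH_DEMO_RAN"] = "1"\n')
    # addsitedir() processes .pth files the same way interpreter startup
    # processes site-packages.
    site.addsitedir(d)

print("payload executed:", os.environ.get("PTH_DEMO_RAN") == "1")
```

This is why the advisory treats a routine `pip install` as sufficient for compromise: the payload runs on the next interpreter start, inside whatever process happens to import Python first.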
References
- docs.litellm.ai/blog/security-update-march-2026
- futuresearch.ai/blog/litellm-pypi-supply-chain-attack
- github.com/BerriAI/litellm/issues/24518
- github.com/advisories/GHSA-5mg7-485q-xm76
- github.com/pypa/advisory-database/tree/main/vulns/litellm/PYSEC-2026-2.yaml
- inspector.pypi.io/project/litellm/1.82.7/packages/79/5f/b6998d42c6ccd32d36e12661f2734602e72a576d52a51f4245aef0b20b4d/litellm-1.82.7-py3-none-any.whl/litellm/proxy/proxy_server.py
- inspector.pypi.io/project/litellm/1.82.8/packages/f6/2c/731b614e6cee0bca1e010a36fd381fba69ee836fe3cb6753ba23ef2b9601/litellm-1.82.8.tar.gz/litellm-1.82.8/litellm_init.pth
- wiz.io/blog/teampcp-attack-kics-github-action
Related Vulnerabilities
All in the same package (litellm):
- CVE-2026-42208 (9.8): SQL injection exposes LLM API credentials
- CVE-2026-35030 (9.1): auth bypass via JWT cache key collision
- CVE-2024-6825 (8.8): RCE via post_call_rules callback injection
- CVE-2026-40217 (8.8): RCE via bytecode rewriting in guardrails API
- CVE-2026-42271 (8.8): RCE via MCP test endpoint command injection