litellm: supply chain attack harvests AI API credentials
Two litellm PyPI releases (1.82.7–1.82.8) contained auto-activating malware that exfiltrated credentials and files to an attacker-controlled endpoint. Any environment that installed and ran these versions should be treated as fully compromised — rotate all credentials immediately, especially LLM provider API keys (OpenAI, Anthropic, Azure, etc.), cloud credentials, and any secrets accessible to the litellm process. Upgrade to a clean release and audit your Python dependency pipeline for similar exposure.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| litellm | pip | >= 1.82.7, <= 1.82.8 | No patch |
Do you use litellm? If you installed 1.82.7 or 1.82.8, assume compromise.
Severity & Risk
Recommended Action
1. IMMEDIATE: Identify every environment with litellm 1.82.7 or 1.82.8 installed, via `pip show litellm` or dependency lock files. Treat each as fully compromised.
2. Rotate all credentials that were accessible to the litellm process: LLM API keys (OpenAI, Anthropic, Azure, Cohere, etc.), cloud provider credentials, database passwords, and any secrets in environment variables.
3. Upgrade litellm to a clean release per the vendor security advisory (docs.litellm.ai/blog/security-update-march-2026).
4. Audit outbound network traffic logs from affected hosts for connections to unknown external IPs or domains during the exposure window (after 2026-03-25).
5. Review CI/CD pipelines and container images that may have cached the malicious package layers.
6. Detection: search `pip-audit` output, SBOMs, and lock files for the affected version range. Indicators of compromise: a `litellm_init.pth` file present in site-packages (auto-executed at Python interpreter startup) and anomalous code in `litellm/proxy/proxy_server.py` near line 130.
7. Enable PyPI package integrity verification, and consider a private mirror or package allowlist for production AI environments.
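Detection steps 1 and 6 above can be sketched in a short script. This is an illustrative triage helper, not official tooling: the version numbers and the `litellm_init.pth` filename come from this advisory, while the function names and output messages are assumptions of this sketch.

```python
# Hedged sketch of detection steps 1 and 6: flag an installed litellm in the
# affected range and scan site-packages directories for the litellm_init.pth
# indicator of compromise described in the advisory.
import site
import sysconfig
from pathlib import Path
from importlib import metadata

AFFECTED = {"1.82.7", "1.82.8"}  # the two malicious releases per the advisory


def litellm_version_affected() -> bool:
    """Return True if the installed litellm version is a known-bad release."""
    try:
        return metadata.version("litellm") in AFFECTED
    except metadata.PackageNotFoundError:
        return False  # litellm not installed in this environment


def find_malicious_pth() -> list[Path]:
    """Return paths of any litellm_init.pth files found in site-packages."""
    candidates = set(site.getsitepackages())
    candidates.add(site.getusersitepackages())
    candidates.add(sysconfig.get_paths()["purelib"])
    return [
        Path(d) / "litellm_init.pth"
        for d in candidates
        if (Path(d) / "litellm_init.pth").exists()
    ]


if __name__ == "__main__":
    if litellm_version_affected() or find_malicious_pth():
        print("COMPROMISED: rotate all credentials and rebuild this environment")
    else:
        print("no known-bad litellm release or .pth indicator found here")
```

Note that this only checks the current interpreter's environment; step 1 still requires running it (or an equivalent lock-file search) across every host, virtualenv, and container image.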
Classification
Compliance Impact
This CVE is relevant to:
Technical Details
NVD Description
After an API token exposure from an exploited trivy dependency, two new releases of `litellm` were uploaded to PyPI containing automatically activated malware, harvesting sensitive credentials and files and exfiltrating them to a remote API. Anyone who has installed and run the project should assume any credentials available to the litellm environment may have been exposed, and revoke/rotate them accordingly.
Exploitation Scenario
An adversary exploits a GitHub Actions workflow or CI credential in a transitive dependency (trivy in this case) to obtain a PyPI API token. They upload two new litellm releases containing malicious code embedded in `proxy_server.py` and a `.pth` file (which Python auto-executes at interpreter startup). Any engineer or automated pipeline running `pip install litellm` or `pip install --upgrade litellm` in the affected window silently installs the backdoor. On first Python startup, the malware enumerates environment variables, searches for credential files (`.aws/credentials`, `.config`, API key files), and POSTs the harvest to an attacker-controlled endpoint — no user interaction, no elevated privileges, no anomalous process required beyond normal litellm operation. In an AI/ML context, this yields LLM API keys enabling the attacker to run inference at the victim's expense, access proprietary prompts/data, or pivot to cloud environments via harvested cloud credentials.
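The `.pth` auto-execution mechanism abused here can be demonstrated benignly. Python's `site` module executes any line in a `.pth` file that begins with `import` when it processes a site directory, which is what makes a planted `.pth` file run on every interpreter startup with no import of the package required. The snippet below simulates that processing against a temporary directory; nothing in it is malicious, and the filename and environment variable are invented for the demo.

```python
# Benign demonstration of why a planted .pth file is an effective backdoor:
# site.py exec's any .pth line starting with "import" when a site directory
# is processed, i.e. automatically at interpreter startup for site-packages.
import os
import site
import tempfile

with tempfile.TemporaryDirectory() as d:
    pth = os.path.join(d, "demo_init.pth")
    # A line beginning with "import" is executed, not treated as a path entry.
    with open(pth, "w") as f:
        f.write("import os; os.environ['PTH_RAN'] = '1'\n")
    # addsitedir() runs the same .pth processing site.py applies at startup.
    site.addsitedir(d)
    print("PTH_RAN" in os.environ)  # → True: the .pth line executed
```

This is why the advisory's indicator of compromise is the mere presence of `litellm_init.pth`: the payload fires in every Python process on the host, not just ones that use litellm.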
Weaknesses (CWE)
References
- docs.litellm.ai/blog/security-update-march-2026
- futuresearch.ai/blog/litellm-pypi-supply-chain-attack
- github.com/BerriAI/litellm/issues/24518
- github.com/advisories/GHSA-5mg7-485q-xm76
- github.com/pypa/advisory-database/tree/main/vulns/litellm/PYSEC-2026-2.yaml
- inspector.pypi.io/project/litellm/1.82.7/packages/79/5f/b6998d42c6ccd32d36e12661f2734602e72a576d52a51f4245aef0b20b4d/litellm-1.82.7-py3-none-any.whl/litellm/proxy/proxy_server.py
- inspector.pypi.io/project/litellm/1.82.8/packages/f6/2c/731b614e6cee0bca1e010a36fd381fba69ee836fe3cb6753ba23ef2b9601/litellm-1.82.8.tar.gz/litellm-1.82.8/litellm_init.pth
- wiz.io/blog/teampcp-attack-kics-github-action
AI Threat Alert