GHSA-5mg7-485q-xm76: litellm: supply chain attack harvests AI API credentials

GHSA-5mg7-485q-xm76 CRITICAL
Published March 25, 2026
CISO Take

Two litellm PyPI releases (1.82.7–1.82.8) contained auto-activating malware that exfiltrated credentials and files to an attacker-controlled endpoint. Any environment that installed and ran these versions should be treated as fully compromised — rotate all credentials immediately, especially LLM provider API keys (OpenAI, Anthropic, Azure, etc.), cloud credentials, and any secrets accessible to the litellm process. Upgrade to a clean release and audit your Python dependency pipeline for similar exposure.

What is the risk?

CRITICAL. litellm is a widely deployed LLM proxy used across AI/ML pipelines in production environments — it sits at the intersection of LLM API keys, cloud credentials, and potentially sensitive prompt data. The malware auto-activated without user interaction beyond a routine pip install/upgrade. The attack vector is trivially reproducible by anyone with PyPI upload access, and the blast radius includes every downstream AI system with credentials in scope. The supply chain entry point (a stolen API token from a transitive dependency) demonstrates chained third-party risk that traditional SCA tooling would not have caught prior to publication.

What systems are affected?

Package   Ecosystem   Vulnerable Range       Patched
litellm   pip         >= 1.82.7, <= 1.82.8   No patch

Do you use litellm? If you installed 1.82.7 or 1.82.8, you're affected — treat those environments as compromised.
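A minimal triage sketch for the check above, using only the standard library. The malicious version set comes from this advisory; the function name and verdict strings are illustrative, not from any official tooling:

```python
from importlib import metadata

# The two malicious releases named in GHSA-5mg7-485q-xm76.
MALICIOUS_VERSIONS = {"1.82.7", "1.82.8"}

def litellm_exposure(dist_name: str = "litellm") -> str:
    """Return a triage verdict for the locally installed package."""
    try:
        installed = metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return "not installed"
    if installed in MALICIOUS_VERSIONS:
        return f"COMPROMISED ({installed}): treat host as breached, rotate credentials"
    return f"clean ({installed})"

print(litellm_exposure())
```

Run this inside each virtualenv or container image, not just on the host interpreter — every environment has its own site-packages.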

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Moderate

What should I do?

7 steps
  1. IMMEDIATE: Identify all environments with litellm 1.82.7 or 1.82.8 installed via 'pip show litellm' or dependency lock files. Treat them as fully compromised.

  2. Rotate all credentials that were accessible to the litellm process: LLM API keys (OpenAI, Anthropic, Azure, Cohere, etc.), cloud provider credentials, database passwords, and any secrets in environment variables.

  3. Upgrade litellm to a clean release per the vendor security advisory (docs.litellm.ai/blog/security-update-march-2026).

  4. Audit outbound network traffic logs from affected hosts for connections to unknown external IPs/domains during the exposure window (post-2026-03-25).

  5. Review CI/CD pipelines and container images that may have cached the malicious package layers.

  6. Detection: Search pip audit output, SBOM, and lock files for the affected version range. Indicators of compromise: a litellm_init.pth file present (auto-executes on Python startup) and anomalous code in litellm/proxy/proxy_server.py near line 130.

  7. Enable PyPI package integrity verification and consider private mirror or allowlist for production AI environments.
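The detection step above can be sketched as a quick host-level check. The IOC filename (litellm_init.pth) comes from this advisory; searching the current interpreter's site-packages directories is an assumption — extend the search dirs to cover every virtualenv and container layer you run:

```python
import sysconfig
from pathlib import Path

# Indicator of compromise from the advisory: a litellm_init.pth dropped into
# site-packages so the payload runs at every interpreter startup.
IOC_NAME = "litellm_init.pth"

def find_ioc(search_dirs=None):
    """Return paths of any litellm_init.pth found in the given directories
    (defaults to this interpreter's site-packages directories)."""
    if search_dirs is None:
        search_dirs = {sysconfig.get_path("purelib"), sysconfig.get_path("platlib")}
    hits = []
    for d in search_dirs:
        candidate = Path(d) / IOC_NAME
        if candidate.exists():
            hits.append(candidate)
    return hits

for hit in find_ioc():
    print(f"IOC present: {hit} -- assume compromise")
```

A hit means the persistence mechanism is installed; absence of the file does not prove a host is clean if the package was installed and later removed.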

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 15 - Accuracy, robustness and cybersecurity
Art. 9 - Risk management system
ISO 42001
A.6.1.3 - Third-party AI component and supply chain security
A.6.2.6 - AI system supply chain security
A.9.4 - Protection of AI system resources
NIST AI RMF
GOVERN 6.1 - AI supply chain risk policies and procedures
GOVERN-6 - Policies and procedures for AI risk in third-party entities
MANAGE-2.4 - Mechanisms for incident response
OWASP LLM Top 10
LLM03:2025 - Supply Chain Vulnerabilities
LLM05:2025 - Improper Output Handling
LLM09:2025 - Misinformation

Frequently Asked Questions

What is GHSA-5mg7-485q-xm76?

Two litellm PyPI releases (1.82.7–1.82.8) contained auto-activating malware that exfiltrated credentials and files to an attacker-controlled endpoint. Any environment that installed and ran these versions should be treated as fully compromised — rotate all credentials immediately, especially LLM provider API keys (OpenAI, Anthropic, Azure, etc.), cloud credentials, and any secrets accessible to the litellm process. Upgrade to a clean release and audit your Python dependency pipeline for similar exposure.

Is GHSA-5mg7-485q-xm76 actively exploited?

No confirmed active exploitation of GHSA-5mg7-485q-xm76 has been reported, but organizations should still patch proactively.

How to fix GHSA-5mg7-485q-xm76?

1. IMMEDIATE: Identify all environments with litellm 1.82.7 or 1.82.8 installed via 'pip show litellm' or dependency lock files. Treat them as fully compromised.
2. Rotate all credentials that were accessible to the litellm process: LLM API keys (OpenAI, Anthropic, Azure, Cohere, etc.), cloud provider credentials, database passwords, and any secrets in environment variables.
3. Upgrade litellm to a clean release per the vendor security advisory (docs.litellm.ai/blog/security-update-march-2026).
4. Audit outbound network traffic logs from affected hosts for connections to unknown external IPs/domains during the exposure window (post-2026-03-25).
5. Review CI/CD pipelines and container images that may have cached the malicious package layers.
6. Detection: Search pip audit output, SBOM, and lock files for the affected version range. Indicators of compromise: a litellm_init.pth file present (auto-executes on Python startup) and anomalous code in litellm/proxy/proxy_server.py near line 130.
7. Enable PyPI package integrity verification and consider a private mirror or allowlist for production AI environments.

What systems are affected by GHSA-5mg7-485q-xm76?

This vulnerability affects the following AI/ML architecture patterns: LLM API proxies and routers, Agent frameworks, RAG pipelines, Model serving, CI/CD and MLOps pipelines.

What is the CVSS score for GHSA-5mg7-485q-xm76?

No CVSS score has been assigned yet.

Technical Details

NVD Description

After an API token exposure from an exploited trivy dependency, two new releases of `litellm` were uploaded to PyPI containing automatically activated malware that harvested sensitive credentials and files and exfiltrated them to a remote API. Anyone who has installed and run the project should assume that any credentials available to the litellm environment may have been exposed, and revoke/rotate them accordingly.

Exploitation Scenario

An adversary exploits a GitHub Actions workflow or CI credential in a transitive dependency (trivy in this case) to obtain a PyPI API token. They upload two new litellm releases containing malicious code embedded in proxy_server.py and a .pth file (which Python auto-executes at interpreter startup). Any engineer or automated pipeline running 'pip install litellm' or 'pip install --upgrade litellm' in the affected window silently installs the backdoor. On first Python startup, the malware enumerates environment variables, searches for credential files (.aws/credentials, .config, API key files), and POSTs the harvest to an attacker-controlled endpoint — no user interaction, no elevated privileges, no anomalous process required beyond normal litellm operation. In an AI/ML context, this yields LLM API keys enabling the attacker to run inference at the victim's expense, access proprietary prompts/data, or pivot to cloud environments via harvested cloud credentials.
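The .pth auto-execution hook the malware abused is standard CPython behavior: the site module exec()s any line in a .pth file that begins with "import". A benign demonstration of that code path, using site.addpackage (the same routine interpreter startup uses to process site-packages .pth files); the demo filename is hypothetical — the real IOC was litellm_init.pth:

```python
import os
import site
import tempfile
from pathlib import Path

# A .pth file normally just lists extra sys.path entries, but the site module
# exec()s any line that starts with "import " -- no function call needed,
# the code runs at every interpreter startup.
demo_dir = Path(tempfile.mkdtemp())
pth = demo_dir / "demo_init.pth"  # hypothetical name for this demo
pth.write_text("import os; os.environ['PTH_DEMO'] = 'executed-at-startup'\n")

# At real startup, site.main() walks site-packages and calls addpackage()
# on each .pth file; here we invoke the same routine manually.
site.addpackage(str(demo_dir), pth.name, known_paths=set())

print(os.environ.get("PTH_DEMO"))  # -> executed-at-startup
```

Because the hook fires before any application code, no litellm import is required for the payload to run — which is why merely having the package installed, not just using it, constitutes exposure.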

Timeline

Published
March 25, 2026
Last Modified
March 27, 2026
First Seen
March 27, 2026

Related Vulnerabilities