CVE-2024-6825: LiteLLM: RCE via post_call_rules callback injection
GHSA-53gh-p8jc-7rg8 | HIGH | PoC AVAILABLE | CISA SSVC: ATTEND

LiteLLM is widely deployed as an LLM proxy and multi-model gateway; any user with API write access to its configuration can inject a malicious callback (e.g., os.system) into post_call_rules and achieve full OS command execution on the host whenever a chat response is processed. Patch immediately to a version beyond 1.40.12, or remove write access to the post_call_rules configuration endpoint. If patching is not immediately possible, restrict configuration API access to trusted administrators only.
Risk Assessment
High risk. CVSS 8.8 with a network-accessible attack vector and only low privileges required creates a wide exploitation window. LiteLLM is commonly deployed as an internal gateway serving multiple teams or as a public-facing proxy; both scenarios expose the configuration endpoint to a broad attacker population. EPSS at 1.35% reflects moderate near-term exploitation likelihood, but the trivial exploitation path (set a config value, trigger any chat call) is likely to push that figure upward. The absence of a listed patch version and the public disclosure via Huntr amplify the urgency. Any multi-tenant LiteLLM deployment (e.g., a shared internal AI gateway) should be treated as critically exposed.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| litellm | pip | >= 1.40.3.dev2, <= 1.40.12 | No patch |
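The vulnerable range in the table above can be checked programmatically. A minimal sketch using the third-party packaging library (an assumption: it is near-universal in Python environments but not part of the standard library):

```python
# Check a litellm version string against the advisory's vulnerable range:
# >= 1.40.3.dev2, <= 1.40.12. Requires `pip install packaging`.
from packaging.specifiers import SpecifierSet

VULNERABLE = SpecifierSet(">=1.40.3.dev2,<=1.40.12")

def is_vulnerable(installed_version: str) -> bool:
    # prereleases=True so .dev builds inside the range are also flagged
    return VULNERABLE.contains(installed_version, prereleases=True)

print(is_vulnerable("1.40.12"))  # True: last release in the vulnerable range
print(is_vulnerable("1.41.0"))   # False: outside the advisory range
```

The installed version string itself can be obtained with `importlib.metadata.version("litellm")` from the standard library.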
Recommended Action
6 steps:

1. PATCH: Upgrade LiteLLM beyond 1.40.12 immediately (monitor the advisory at GHSA-53gh-p8jc-7rg8 for a fixed version tag).
2. RESTRICT: If patching is blocked, lock down the LiteLLM configuration API; restrict POST/PUT to the config endpoint to admin-only service accounts via network policy or API gateway ACLs.
3. AUDIT: Review current post_call_rules values in all LiteLLM config files and running instances; any non-empty value referencing system modules (os, subprocess, sys) is a compromise indicator.
4. DETECT: Alert on the LiteLLM process spawning unexpected child processes (os.system, subprocess.Popen); monitor for anomalous outbound connections from the LiteLLM host after chat completions.
5. ISOLATE: Run LiteLLM in a container with a read-only filesystem, dropped capabilities, and no internet egress where possible, to limit blast radius.
6. ROTATE: After any suspected exploitation, rotate all secrets accessible from the LiteLLM runtime environment (OpenAI/Anthropic API keys, DB credentials, cloud IAM tokens).
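The AUDIT step above can be sketched as a small scanner over LiteLLM YAML config text. The config layouts handled here (inline list and block list forms of post_call_rules) are assumptions about common YAML shapes; adapt the patterns to your deployment:

```python
# Sketch: flag post_call_rules values that reference system-level modules.
import re

SUSPICIOUS = re.compile(r"^(os|subprocess|sys|shutil)\.")

def audit_config_text(text: str) -> list:
    """Return suspicious callback strings found in a YAML config's text."""
    findings = []
    in_rules = False
    for line in text.splitlines():
        if "post_call_rules" in line:
            in_rules = True
            # inline form: post_call_rules: ["os.system"]
            findings += [v for v in re.findall(r'["\']([^"\']+)["\']', line)
                         if SUSPICIOUS.match(v)]
            continue
        if in_rules:
            if re.match(r"^\s*-\s*", line):
                # block-list form:  - os.system
                value = re.sub(r"^\s*-\s*", "", line).strip("\"' ")
                if SUSPICIOUS.match(value):
                    findings.append(value)
            else:
                in_rules = False
    return findings

print(audit_config_text('post_call_rules: ["os.system"]'))  # ['os.system']
```

Run the same check against live instances by diffing their running configuration, not just files on disk; an attacker who set the value via the API may never have touched a config file.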
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2024-6825?
CVE-2024-6825 is a remote code execution vulnerability in the LiteLLM proxy (BerriAI/litellm). Any user with API write access to the configuration can inject a callback such as os.system into post_call_rules; the callback then executes arbitrary OS commands on the host whenever a chat response is processed. The fix is to patch beyond 1.40.12 or, failing that, to restrict configuration API access to trusted administrators.
Is CVE-2024-6825 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2024-6825, increasing the risk of exploitation.
How to fix CVE-2024-6825?
1. PATCH: Upgrade LiteLLM beyond 1.40.12 immediately (monitor the advisory at GHSA-53gh-p8jc-7rg8 for a fixed version tag).
2. RESTRICT: If patching is blocked, lock down the LiteLLM configuration API; restrict POST/PUT to the config endpoint to admin-only service accounts via network policy or API gateway ACLs.
3. AUDIT: Review current post_call_rules values in all LiteLLM config files and running instances; any non-empty value referencing system modules (os, subprocess, sys) is a compromise indicator.
4. DETECT: Alert on the LiteLLM process spawning unexpected child processes (os.system, subprocess.Popen); monitor for anomalous outbound connections from the LiteLLM host after chat completions.
5. ISOLATE: Run LiteLLM in a container with a read-only filesystem, dropped capabilities, and no internet egress where possible, to limit blast radius.
6. ROTATE: After any suspected exploitation, rotate all secrets accessible from the LiteLLM runtime environment (OpenAI/Anthropic API keys, DB credentials, cloud IAM tokens).
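The DETECT step can be prototyped from a plain process snapshot, such as the rows emitted by `ps -eo pid,ppid,comm`. The expected-children set below is an illustrative assumption to tune per deployment:

```python
# Sketch: flag unexpected child processes of LiteLLM workers.
EXPECTED_CHILDREN = {"python", "python3"}  # assumption: benign worker children

def suspicious_children(snapshot):
    """snapshot: iterable of (pid, ppid, comm) tuples."""
    litellm_pids = {pid for pid, _, comm in snapshot if "litellm" in comm}
    return [(pid, comm) for pid, ppid, comm in snapshot
            if ppid in litellm_pids and comm not in EXPECTED_CHILDREN]

snap = [(100, 1, "litellm"), (200, 100, "sh"), (201, 100, "python3")]
print(suspicious_children(snap))  # [(200, 'sh')]: a shell spawned by the proxy
```

In production this logic belongs in an EDR rule or auditd policy rather than a polling script, but the signal is the same: a shell or network utility parented by the proxy process is a strong compromise indicator for this CVE.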
What systems are affected by CVE-2024-6825?
This vulnerability affects the following AI/ML architecture patterns: LLM proxy and gateway deployments, Multi-model routing pipelines, AI agent frameworks, Model serving infrastructure, Internal AI platforms.
What is the CVSS score for CVE-2024-6825?
CVE-2024-6825 has a CVSS v3.0 base score of 8.8 (HIGH). The EPSS exploitation probability is 3.02%.
Technical Details
NVD Description
BerriAI/litellm version 1.40.12 contains a vulnerability that allows remote code execution. The issue exists in the handling of the 'post_call_rules' configuration, where a callback function can be added. The provided value is split at the final '.' mark, with the last part considered the function name and the remaining part appended with the '.py' extension and imported. This allows an attacker to set a system method, such as 'os.system', as a callback, enabling the execution of arbitrary commands when a chat response is processed.
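The resolution logic described above can be approximated in a few lines. This is a simplified sketch, not LiteLLM's actual code; the real implementation appends '.py' and imports the resulting file, but for a value like 'os.system' the effect is the same:

```python
import importlib
import os

def resolve_callback(rule_value: str):
    # Split at the final '.', treat the left side as a module path and the
    # right side as a function name -- approximating the flawed logic.
    module_path, _, func_name = rule_value.rpartition(".")
    module = importlib.import_module(module_path)  # "os" -> stdlib os module
    return getattr(module, func_name)

# An attacker-controlled config string resolves to a live
# command-execution primitive:
assert resolve_callback("os.system") is os.system
```

Because the string is never validated against an allowlist, any importable module and attribute is reachable, which is why a configuration value alone is sufficient for code execution.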
Exploitation Scenario
An attacker with low-privilege access to an organization's internal AI gateway (LiteLLM 1.40.12) — e.g., a developer account or a compromised CI/CD pipeline credential — sends a PATCH request to the LiteLLM configuration endpoint setting post_call_rules to ['os.system']. LiteLLM splits this at the final dot: function name becomes 'system', module path becomes 'os.py' which resolves to Python's built-in os module. The attacker then sends a standard chat completion request. When the response is processed, LiteLLM invokes os.system() with a crafted argument — a reverse shell payload or credential exfiltration command. From the LiteLLM host, the attacker now has shell access, exfiltrates all API keys stored in the environment (including keys to GPT-4, Claude, Gemini endpoints), and pivots to connected infrastructure such as vector databases or model registries. In a multi-tenant SaaS AI platform, one compromised developer account is sufficient to own the entire LLM gateway and all its downstream integrations.
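A defensive pattern (illustrative only; not an existing LiteLLM feature, and the module name below is hypothetical) is to validate callback strings against an explicit module allowlist before they are ever resolved:

```python
# Hypothetical hardening sketch: only resolve callbacks from trusted modules.
ALLOWED_CALLBACK_MODULES = {"my_org.llm_rules"}  # illustrative trusted package

def validate_post_call_rule(rule_value: str) -> str:
    module_path, _, func_name = rule_value.rpartition(".")
    if module_path not in ALLOWED_CALLBACK_MODULES:
        raise ValueError(f"callback module {module_path!r} is not allowlisted")
    if not func_name.isidentifier():
        raise ValueError(f"invalid function name {func_name!r}")
    return rule_value

validate_post_call_rule("my_org.llm_rules.redact_pii")  # accepted
# validate_post_call_rule("os.system") would raise ValueError
```

An allowlist of this kind turns the configuration endpoint from an arbitrary-import gadget into a bounded extension point, which is the structural fix regardless of the specific patched version.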
Weaknesses (CWE)
CVSS Vector
CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H

References
- github.com/BerriAI/litellm/blob/056913fd7049923a106130b02d7c29e7f312beec/litellm/utils.py
- github.com/advisories/GHSA-53gh-p8jc-7rg8
- github.com/berriai/litellm/commit/441c7275ed2715f47650a7c2e525055c804073a9
- huntr.com/bounties/1d98bebb-6cf4-46c9-87c3-d3b1972973b5
- nvd.nist.gov/vuln/detail/CVE-2024-6825
- github.com/ARPSyndicate/cve-scores (tagged: exploit)
Related Vulnerabilities
- CVE-2026-42208 (9.8): LiteLLM: SQL injection exposes LLM API credentials (same package: litellm)
- CVE-2026-35030 (9.1): LiteLLM: auth bypass via JWT cache key collision (same package: litellm)
- CVE-2026-40217 (8.8): LiteLLM: RCE via bytecode rewriting in guardrails API (same package: litellm)
- CVE-2026-42271 (8.8): LiteLLM: RCE via MCP test endpoint command injection (same package: litellm)
- CVE-2025-0628 (8.1): litellm: privilege escalation viewer→proxy admin via bad API key (same package: litellm)