CVE-2024-4181: llama_index: RCE via eval() in RunGptLLM connector
Exploitation in the wild: unknown · Public PoC: available

If your team uses llama_index's RunGptLLM class (JinaAI RunGpt integration), upgrade to v0.10.13 or later immediately. A malicious or compromised LLM hosting provider can execute arbitrary code on client machines via unsanitized eval() calls. Patching is necessary but insufficient — also audit which LLM providers you trust with execution-level access to your infrastructure.
Risk Assessment
Effectively Critical despite missing CVSS scores. The attack requires an adversary to control or compromise an LLM hosting provider, raising the bar slightly — but that scenario is realistic given supply chain attacks, MITM, or provider breaches. Impact is full RCE on any machine running the vulnerable library. AI/ML pipelines are particularly exposed as they often run with elevated privileges and broad network access to sensitive internal systems and secrets stores.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| llama-index | pip | < 0.10.13 | 0.10.13 |
Do you use llama-index below v0.10.13 with the RunGptLLM connector? You're affected.
Recommended Action
Six steps:

1. Upgrade llama_index to v0.10.13 or later immediately — this is the only complete fix.
2. Audit all LLM connector classes in your llama_index deployments for eval() or exec() patterns in response handling.
3. Enumerate which LLM hosting providers your AI pipelines connect to; apply zero-trust principles and validate provider authenticity.
4. Network-segment AI inference hosts to limit blast radius from RCE — restrict outbound connections from pipeline processes.
5. Monitor AI pipeline processes for unexpected subprocess spawning, outbound connections, or credential access patterns.
6. Rotate any secrets (API keys, DB credentials) accessible from environments running the vulnerable version.
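The eval()/exec() audit recommended above can be partly automated. A minimal sketch (the function name and CLI shape are ours, not part of any tool) walks the AST of every Python file under a directory and reports calls to the eval or exec builtins:

```python
import ast
import sys
from pathlib import Path

def find_dynamic_eval(root: str):
    """Return (file, line, name) for every call to eval() or exec() under root."""
    hits = []
    for path in Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files that don't parse as Python
        for node in ast.walk(tree):
            if (isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Name)
                    and node.func.id in ("eval", "exec")):
                hits.append((str(path), node.lineno, node.func.id))
    return hits

if __name__ == "__main__" and len(sys.argv) > 1:
    for filename, line, name in find_dynamic_eval(sys.argv[1]):
        print(f"{filename}:{line}: call to {name}()")
```

Pointing it at your virtualenv's site-packages (or your own connector code) surfaces candidates for manual review; AST matching finds direct calls only, so aliased or indirect invocations still need human eyes.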
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2024-4181?
CVE-2024-4181 is a command injection vulnerability in the RunGptLLM class of the llama_index library (the JinaAI RunGpt integration). Because LLM responses are passed to Python's eval() without sanitization, a malicious or compromised LLM hosting provider can execute arbitrary code on client machines. The issue was fixed in llama_index v0.10.13.
Is CVE-2024-4181 actively exploited?
Active exploitation in the wild has not been confirmed. However, proof-of-concept exploit code is publicly available for CVE-2024-4181, increasing the risk of exploitation.
How to fix CVE-2024-4181?
1. Upgrade llama_index to v0.10.13 or later immediately — this is the only complete fix.
2. Audit all LLM connector classes in your llama_index deployments for eval() or exec() patterns in response handling.
3. Enumerate which LLM hosting providers your AI pipelines connect to; apply zero-trust principles and validate provider authenticity.
4. Network-segment AI inference hosts to limit blast radius from RCE — restrict outbound connections from pipeline processes.
5. Monitor AI pipeline processes for unexpected subprocess spawning, outbound connections, or credential access patterns.
6. Rotate any secrets (API keys, DB credentials) accessible from environments running the vulnerable version.
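The upgrade recommended above can be verified programmatically. A minimal sketch (the helper name is ours; the version parse is naive and assumes plain `X.Y.Z` strings) compares the installed release against the patched one:

```python
from importlib import metadata

PATCHED = (0, 10, 13)  # first llama_index release with the fix

def is_patched(version: str) -> bool:
    """True if a plain X.Y.Z version string is at or above the fixed release."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts >= PATCHED

if __name__ == "__main__":
    try:
        installed = metadata.version("llama-index")
    except metadata.PackageNotFoundError:
        print("llama-index is not installed")
    else:
        status = "patched" if is_patched(installed) else "VULNERABLE"
        print(f"llama-index {installed}: {status}")
```

For anything beyond a quick check (pre-releases, post-releases), prefer a real version parser such as `packaging.version` over the tuple comparison above.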
What systems are affected by CVE-2024-4181?
This vulnerability affects the following AI/ML architecture patterns: agent frameworks, LLM integrations, RAG pipelines, AI application backends, inference pipelines.
What is the CVSS score for CVE-2024-4181?
No CVSS score has been assigned yet.
Technical Details
NVD Description
A command injection vulnerability exists in the RunGptLLM class of the llama_index library, version 0.9.47, used by the RunGpt framework from JinaAI to connect to Language Learning Models (LLMs). The vulnerability arises from the improper use of the eval function, allowing a malicious or compromised LLM hosting provider to execute arbitrary commands on the client's machine. This issue was fixed in version 0.10.13. The exploitation of this vulnerability could lead to a hosting provider gaining full control over client machines.
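The flaw boils down to trusting the provider's reply as code rather than data. A simplified, hypothetical sketch (not the actual llama_index source; function names and response shape are ours) contrasts the two patterns:

```python
import json

def handle_response_vulnerable(body: str):
    """Anti-pattern: the provider's reply is evaluated as Python, so a
    response like "__import__('os').system('...')" runs on the client."""
    return eval(body)  # arbitrary code execution if the provider is hostile

def handle_response_safe(body: str) -> str:
    """Treat the reply as data: parse JSON and extract a plain string."""
    payload = json.loads(body)
    if not isinstance(payload.get("text"), str):
        raise ValueError("unexpected response shape")
    return payload["text"]

# Under the safe handler, a hostile "completion" stays inert text:
malicious = '{"text": "__import__(\'os\').system(\'id\')"}'
print(handle_response_safe(malicious))  # printed as a string, never executed
```

This is also why the patch matters more than any output filtering: once eval() is in the path, no blocklist of "dangerous" strings reliably closes the hole.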
Exploitation Scenario
An adversary operates or compromises an LLM hosting provider compatible with the JinaAI RunGpt interface. When a victim's application queries the LLM via llama_index's RunGptLLM connector, the provider returns a crafted response containing malicious Python code. The unpatched library passes this response directly to eval(), executing the payload with the privileges of the AI application process. The attacker achieves RCE sufficient to exfiltrate API keys, database credentials, and model artifacts, or to establish a reverse shell for persistent access — all triggered silently during a routine LLM inference call.
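Part of the monitoring called for above can even run in-process: CPython's `sys.addaudithook()` receives an "exec" audit event whenever the eval() or exec() builtins run, and "subprocess.Popen" when a child process is spawned. This is a tripwire sketch only, not a complete monitoring solution (note that audit hooks cannot be uninstalled once added):

```python
import sys

suspicious_events = []

def tripwire(event: str, args) -> None:
    # CPython raises "exec" for the eval()/exec() builtins and
    # "subprocess.Popen" when a child process is spawned.
    if event in ("exec", "subprocess.Popen"):
        suspicious_events.append(event)

sys.addaudithook(tripwire)  # audit hooks cannot be removed once installed

# Any later dynamic evaluation (e.g. a connector eval()-ing a response)
# now leaves a trace:
eval("1 + 1")
print(suspicious_events)
```

In production you would forward such events to your logging pipeline instead of a list, and alert on them from hosts that should never evaluate dynamic code.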
Related Vulnerabilities
| CVE | CVSS | Summary | Package |
|---|---|---|---|
| CVE-2024-23751 | 9.8 | LlamaIndex: SQL injection in Text-to-SQL feature | llamaindex |
| CVE-2024-14021 | 7.8 | llamaindex: Deserialization enables RCE | llamaindex |
| CVE-2024-12704 | 7.5 | llama-index: DoS via infinite loop in LangChain LLM | llamaindex |
| CVE-2024-58339 | 7.5 | llamaindex: Resource Exhaustion enables DoS | llamaindex |
| CVE-2024-12911 | 7.1 | llama-index: SQLi+DoS via prompt injection in query engine | llamaindex |