CVE-2023-38860: LangChain: RCE via unsanitized prompt parameter

GHSA-fj32-q626-pjjc CRITICAL PoC AVAILABLE CISA: ATTEND
Published August 15, 2023
CISO Take

Any application running LangChain < 0.0.247 that accepts user-supplied prompts is exposed to unauthenticated remote code execution. Patch to 0.0.247+ immediately—no workaround preserves full functionality. Audit all LangChain deployments, especially public-facing chatbots, RAG pipelines, and AI agent services; a public PoC exists via GitHub issue #7641.
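As a first triage step, each environment can be checked for a vulnerable build. The following is an illustrative sketch using only the standard library; the version comparison assumes plain numeric `major.minor.patch` tags, so adapt it to your inventory tooling (pre-release suffixes like `rc1` are treated as unparseable here).

```python
# Triage sketch: flag environments running a LangChain build older than 0.0.247.
# Assumes plain numeric version strings; not a substitute for a real SBOM scan.
from importlib.metadata import PackageNotFoundError, version

PATCHED = (0, 0, 247)

def langchain_is_vulnerable() -> bool:
    """Return True if the installed langchain predates 0.0.247."""
    try:
        installed = tuple(int(p) for p in version("langchain").split(".")[:3])
    except PackageNotFoundError:
        return False  # langchain not installed in this environment
    except ValueError:
        return True  # non-numeric version part: treat as suspect, inspect manually
    return installed < PATCHED
```

Run this inside each virtualenv or container image, not just on the host: shadow deployments (step 2 below) are exactly the installs a host-level scan misses.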

Risk Assessment

CVSS 9.8 with zero authentication, no user interaction, and network-accessible attack vector makes this trivially exploitable at scale. LangChain is among the most widely deployed LLM frameworks globally, creating broad exposure. EPSS of 1.36% understates operational risk given the framework's prevalence in production AI systems, public PoC availability, and the complete absence of any exploit prerequisite.

Affected Systems

Package     Ecosystem   Vulnerable Range    Patched
langchain   pip         >= 0, < 0.0.247     0.0.247

Severity & Risk

CVSS 3.1: 9.8 / 10
EPSS: 1.4% chance of exploitation in 30 days (higher than 80% of all CVEs)
Exploitation Status: Exploit Available
Exploitation: MEDIUM (sophistication: trivial; confidence: medium)
CISA SSVC: Public PoC (indexed via trickest/cve)

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

AV: Network
AC: Low
PR: None
UI: None
S: Unchanged
C: High
I: High
A: High

Recommended Action

7 steps
  1. Upgrade LangChain to >= 0.0.247 immediately across all environments (dev, staging, prod).

  2. Inventory all LangChain instances—shadow deployments are the highest risk.

  3. Audit application code for any user-controlled input passed to prompt parameters without sanitization.

  4. Deploy WAF rules or input validation layers to block code injection payloads at the application boundary as a temporary compensating control.

  5. Restrict runtime permissions for LangChain processes (least privilege, no outbound internet, read-only filesystem where feasible).

  6. Monitor for anomalous process spawning, unexpected outbound connections, or env variable access from LangChain service processes.

  7. Rotate all credentials (API keys, DB passwords) stored in environment variables accessible to any affected LangChain deployment.
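Steps 3 and 4 can be backed by a coarse input screen at the application boundary. The sketch below is a compensating control only, not a fix: the pattern list is illustrative and trivially incomplete, and upgrading to 0.0.247+ remains the actual remediation.

```python
# Compensating-control sketch for steps 3-4: coarse screening of user-supplied
# prompts before they reach any LangChain chain. The pattern list is
# illustrative, not exhaustive; upgrading is the only real fix.
import re

SUSPICIOUS = [
    r"__import__",
    r"\bexec\s*\(",
    r"\beval\s*\(",
    r"\bos\.system\b",
    r"\bsubprocess\b",
]
_PATTERN = re.compile("|".join(SUSPICIOUS))

def screen_prompt(prompt: str) -> str:
    """Reject prompts carrying obvious code-injection markers."""
    if _PATTERN.search(prompt):
        raise ValueError("prompt rejected: possible code-injection payload")
    return prompt
```

Deny-list screening like this is easy to bypass with encoding tricks, so pair it with the runtime restrictions in step 5 rather than relying on it alone.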

CISA SSVC Assessment

Decision: Attend
Exploitation: poc
Automatable: Yes
Technical Impact: total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
  Article 15 - Accuracy, Robustness and Cybersecurity
ISO 42001
  6.1.2 - AI Risk Assessment
  8.4 - AI System Lifecycle — Risk Controls
NIST AI RMF
  GOVERN-6.1 - Policies for Third-Party AI Risk
  MANAGE-2.2 - Manage AI Risks — Deployment Context
OWASP LLM Top 10
  LLM01 - Prompt Injection
  LLM02 - Insecure Output Handling
  LLM05 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2023-38860?

CVE-2023-38860 is an unauthenticated remote code execution vulnerability in LangChain versions before 0.0.247: a crafted value passed through the prompt parameter is executed server-side. Any application on a vulnerable version that accepts user-supplied prompts is exposed. Patch to 0.0.247 or later immediately; no workaround preserves full functionality. Audit all LangChain deployments, especially public-facing chatbots, RAG pipelines, and AI agent services; a public PoC exists via GitHub issue #7641.

Is CVE-2023-38860 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2023-38860, increasing the risk of exploitation.

How to fix CVE-2023-38860?

1. Upgrade LangChain to >= 0.0.247 immediately across all environments (dev, staging, prod).
2. Inventory all LangChain instances—shadow deployments are the highest risk.
3. Audit application code for any user-controlled input passed to prompt parameters without sanitization.
4. Deploy WAF rules or input validation layers to block code injection payloads at the application boundary as a temporary compensating control.
5. Restrict runtime permissions for LangChain processes (least privilege, no outbound internet, read-only filesystem where feasible).
6. Monitor for anomalous process spawning, unexpected outbound connections, or env variable access from LangChain service processes.
7. Rotate all credentials (API keys, DB passwords) stored in environment variables accessible to any affected LangChain deployment.

What systems are affected by CVE-2023-38860?

This vulnerability affects the following AI/ML architecture patterns: agent frameworks, RAG pipelines, LLM application backends, chatbot services, document processing pipelines, AI automation workflows.

What is the CVSS score for CVE-2023-38860?

CVE-2023-38860 has a CVSS v3.1 base score of 9.8 (CRITICAL). The EPSS exploitation probability is 1.36%.

Technical Details

NVD Description

An issue in LangChain v.0.0.231 allows a remote attacker to execute arbitrary code via the prompt parameter.

Exploitation Scenario

An adversary identifies a public-facing application built on LangChain—a document Q&A chatbot, an internal AI assistant with an exposed API, or a LangChain-powered automation endpoint. They send a crafted HTTP request embedding a malicious payload in the prompt parameter that exploits LangChain's unsafe code evaluation logic. The payload executes arbitrary Python server-side: extracting OPENAI_API_KEY and DATABASE_URL from environment variables, exfiltrating them to an attacker-controlled server, then dropping a reverse shell. No credentials, no prior access, no social engineering required. The full attack chain takes under 60 seconds using the publicly documented PoC.
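The exact vulnerable code path is internal to pre-0.0.247 LangChain; the following is a deliberately simplified illustration of the anti-pattern this class of bug describes, not LangChain's actual code. It shows why attacker-reachable text becomes server-side code execution wherever such an evaluation path exists.

```python
# Simplified illustration of the unsafe-evaluation anti-pattern behind this
# class of vulnerability. NOT LangChain's actual code: it only demonstrates
# why a crafted "prompt" value turns into arbitrary server-side execution.

def vulnerable_handler(prompt: str) -> str:
    # Anti-pattern: evaluating attacker-reachable text as Python.
    return str(eval(prompt))  # intentionally unsafe demo

# A benign input and a hostile one take the same path:
#   vulnerable_handler("2 + 2")  -> "4"
#   vulnerable_handler("__import__('os').getenv('OPENAI_API_KEY')")
#     -> leaks a secret from the server's environment
```

This is why the scenario above needs no credentials or user interaction: the only precondition is network reach to an endpoint that forwards the prompt into the unsafe evaluation path.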

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
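For teams ingesting this advisory programmatically, the vector string decodes mechanically into the metrics listed under Attack Surface. A minimal sketch, using only the standard CVSS v3.1 metric abbreviations:

```python
# Sketch: decode the CVSS v3.1 base-vector string above into readable metrics.
VECTOR = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"

VALUES = {
    "AV": {"N": "Network", "A": "Adjacent", "L": "Local", "P": "Physical"},
    "AC": {"L": "Low", "H": "High"},
    "PR": {"N": "None", "L": "Low", "H": "High"},
    "UI": {"N": "None", "R": "Required"},
    "S":  {"U": "Unchanged", "C": "Changed"},
    "C":  {"N": "None", "L": "Low", "H": "High"},
    "I":  {"N": "None", "L": "Low", "H": "High"},
    "A":  {"N": "None", "L": "Low", "H": "High"},
}

def decode(vector: str) -> dict:
    """Map each base-metric abbreviation to its human-readable value."""
    metrics = dict(part.split(":") for part in vector.split("/")[1:])
    return {name: VALUES[name][code] for name, code in metrics.items()}
```

Decoding this vector reproduces the Attack Surface table exactly: network vector, low complexity, no privileges or interaction, and high impact across confidentiality, integrity, and availability.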

Timeline

Published
August 15, 2023
Last Modified
November 21, 2024
First Seen
August 15, 2023

Related Vulnerabilities