CVE-2023-38860: LangChain: RCE via unsanitized prompt parameter
GHSA-fj32-q626-pjjc | CRITICAL | PoC AVAILABLE | CISA SSVC: ATTEND

Any application running LangChain < 0.0.247 that accepts user-supplied prompts is exposed to unauthenticated remote code execution. Patch to 0.0.247+ immediately—no workaround preserves full functionality. Audit all LangChain deployments, especially public-facing chatbots, RAG pipelines, and AI agent services; a public PoC exists via GitHub issue #7641.
Risk Assessment
CVSS 9.8 with zero authentication, no user interaction, and network-accessible attack vector makes this trivially exploitable at scale. LangChain is among the most widely deployed LLM frameworks globally, creating broad exposure. EPSS of 1.36% understates operational risk given the framework's prevalence in production AI systems, public PoC availability, and the complete absence of any exploit prerequisite.
Recommended Action
1. Upgrade LangChain to >= 0.0.247 immediately across all environments (dev, staging, prod).
2. Inventory all LangChain instances—shadow deployments are the highest risk.
3. Audit application code for any user-controlled input passed to prompt parameters without sanitization.
4. Deploy WAF rules or input validation layers to block code injection payloads at the application boundary as a temporary compensating control.
5. Restrict runtime permissions for LangChain processes (least privilege, no outbound internet, read-only filesystem where feasible).
6. Monitor for anomalous process spawning, unexpected outbound connections, or env variable access from LangChain service processes.
7. Rotate all credentials (API keys, DB passwords) stored in environment variables accessible to any affected LangChain deployment.
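Steps 3 and 4 can be sketched as an application-boundary filter. The pattern list below is an illustrative assumption, not an exhaustive blocklist (blocklists can be bypassed), so treat it strictly as a stopgap until the upgrade to >= 0.0.247 lands; the function name `reject_unsafe_prompt` is hypothetical.

```python
import re

# Illustrative patterns for template/code-injection payloads. This is an
# assumption for demonstration, NOT a complete defense: upgrade LangChain
# rather than relying on filtering alone.
SUSPICIOUS = [
    re.compile(r"\{\{.*\}\}", re.S),                   # template expressions
    re.compile(r"\{%.*%\}", re.S),                     # template statements
    re.compile(r"__[a-zA-Z]+__"),                      # dunder access (__import__, ...)
    re.compile(r"\b(eval|exec|os\.system|subprocess)\b"),
]

def reject_unsafe_prompt(user_input: str) -> str:
    """Raise ValueError if the input looks like a code-injection payload."""
    for pattern in SUSPICIOUS:
        if pattern.search(user_input):
            raise ValueError("prompt rejected by injection filter")
    return user_input
```

Benign questions pass through unchanged; input containing template expressions such as `{{ ''.__class__ }}` is rejected before it ever reaches a prompt parameter.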
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2023-38860?
Any application running LangChain < 0.0.247 that accepts user-supplied prompts is exposed to unauthenticated remote code execution. Patch to 0.0.247+ immediately—no workaround preserves full functionality. Audit all LangChain deployments, especially public-facing chatbots, RAG pipelines, and AI agent services; a public PoC exists via GitHub issue #7641.
Is CVE-2023-38860 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2023-38860, increasing the risk of exploitation.
How to fix CVE-2023-38860?
1. Upgrade LangChain to >= 0.0.247 immediately across all environments (dev, staging, prod).
2. Inventory all LangChain instances—shadow deployments are the highest risk.
3. Audit application code for any user-controlled input passed to prompt parameters without sanitization.
4. Deploy WAF rules or input validation layers to block code injection payloads at the application boundary as a temporary compensating control.
5. Restrict runtime permissions for LangChain processes (least privilege, no outbound internet, read-only filesystem where feasible).
6. Monitor for anomalous process spawning, unexpected outbound connections, or env variable access from LangChain service processes.
7. Rotate all credentials (API keys, DB passwords) stored in environment variables accessible to any affected LangChain deployment.
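The inventory step can be partially automated. A minimal sketch, assuming the `packaging` library is available (it ships alongside pip), that checks whether the locally installed langchain predates the patched release:

```python
# Sketch: audit the installed LangChain version against the patched release.
from importlib.metadata import version, PackageNotFoundError
from packaging.version import Version

PATCHED = Version("0.0.247")

def langchain_is_vulnerable() -> bool:
    """Return True if the locally installed langchain predates the fix."""
    try:
        installed = Version(version("langchain"))
    except PackageNotFoundError:
        return False  # langchain is not installed in this environment
    return installed < PATCHED

if __name__ == "__main__":
    status = "VULNERABLE" if langchain_is_vulnerable() else "patched or absent"
    print(f"langchain: {status}")
```

Run this in every virtualenv and container image you ship; `importlib.metadata` only sees the active environment, which is exactly why shadow deployments are the highest risk.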
What systems are affected by CVE-2023-38860?
This vulnerability affects the following AI/ML architecture patterns: agent frameworks, RAG pipelines, LLM application backends, chatbot services, document processing pipelines, AI automation workflows.
What is the CVSS score for CVE-2023-38860?
CVE-2023-38860 has a CVSS v3.1 base score of 9.8 (CRITICAL). The EPSS exploitation probability is 1.36%.
Technical Details
NVD Description
An issue in LangChain v.0.0.231 allows a remote attacker to execute arbitrary code via the prompt parameter.
Exploitation Scenario
An adversary identifies a public-facing application built on LangChain—a document Q&A chatbot, an internal AI assistant with an exposed API, or a LangChain-powered automation endpoint. They send a crafted HTTP request embedding a malicious payload in the prompt parameter that exploits LangChain's unsafe code evaluation logic. The payload executes arbitrary Python server-side: extracting OPENAI_API_KEY and DATABASE_URL from environment variables, exfiltrating them to an attacker-controlled server, then dropping a reverse shell. No credentials, no prior access, no social engineering required. The full attack chain takes under 60 seconds using the publicly documented PoC.
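The scenario above rests on server-side template injection: when user input is treated as the *template* rather than as inert data, template syntax in it evaluates on the server. The following is a generic, benign illustration of that vulnerability class using jinja2, not the exact LangChain code path fixed in 0.0.247:

```python
# Generic illustration of server-side template injection (SSTI).
# The "payload" here is a harmless arithmetic expression, {{ 7 * 7 }}.
from jinja2 import Environment

env = Environment()

# Safe usage: user text is passed as DATA into a fixed template,
# so template syntax in the input stays literal text.
fixed = env.from_string("Answer the question: {{ question }}")
safe_out = fixed.render(question="{{ 7 * 7 }}")
print(safe_out)  # Answer the question: {{ 7 * 7 }}

# Unsafe usage: user text IS the template, so its expressions evaluate
# server-side. With a real payload this becomes code execution.
user_controlled = "{{ 7 * 7 }}"
unsafe_out = env.from_string(user_controlled).render()
print(unsafe_out)  # 49
```

The defensive rule is the same regardless of framework: user input must only ever flow into the data slots of a fixed template, never into the template string itself.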
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

References
- github.com/hwchase17/langchain/issues/7641 (exploit PoC issue)
- github.com/advisories/GHSA-fj32-q626-pjjc
- github.com/langchain-ai/langchain/commit/d353d668e4b0514122a443cef91de7f76fea4245
- github.com/langchain-ai/langchain/commit/fab24457bcf8ede882abd11419769c92bc4e7751
- github.com/langchain-ai/langchain/issues/7641
- github.com/langchain-ai/langchain/pull/8092
- github.com/langchain-ai/langchain/pull/8425
- github.com/pypa/advisory-database/tree/main/vulns/langchain/PYSEC-2023-145.yaml
- nvd.nist.gov/vuln/detail/CVE-2023-38860
Related Vulnerabilities
- CVE-2025-2828 (10.0) LangChain RequestsToolkit: SSRF exposes cloud metadata [same package: langchain]
- CVE-2023-34541 (9.8) LangChain: RCE via unsafe load_prompt deserialization [same package: langchain]
- CVE-2023-29374 (9.8) LangChain: RCE via prompt injection in LLMMathChain [same package: langchain]
- CVE-2023-34540 (9.8) LangChain: RCE via JiraAPIWrapper crafted input [same package: langchain]
- CVE-2023-36258 (9.8) LangChain: unauthenticated RCE via code injection [same package: langchain]