CVE-2023-36281: LangChain: RCE via malicious JSON prompt template

CRITICAL · PoC Available · CISA: Attend
Published August 22, 2023
CISO Take

Any LangChain deployment on v0.0.171 or earlier that loads prompt templates from JSON files is vulnerable to unauthenticated remote code execution — no user interaction required. Update to v0.0.312+ immediately and audit all uses of load_prompt() for untrusted input paths. If you cannot patch now, disable external prompt file loading and treat prompt template sources as a trust boundary.
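As a quick triage aid, a minimal sketch of flagging vulnerable installs at startup, assuming the v0.0.312 patched floor from the vendor advisory (the helper names are illustrative, and the naive version parsing does not handle pre-release suffixes):

```python
# Hedged sketch: detect an unpatched langchain install at startup.
from importlib.metadata import version, PackageNotFoundError

PATCHED = (0, 0, 312)  # first safe release per the advisory

def parse(v: str) -> tuple:
    """Naive semver parse; assumes plain numeric x.y.z strings."""
    return tuple(int(p) for p in v.split(".")[:3])

def langchain_is_patched() -> bool:
    try:
        return parse(version("langchain")) >= PATCHED
    except PackageNotFoundError:
        return True  # not installed, nothing to patch

print("langchain patched:", langchain_is_patched())
```

A check like this belongs in CI as well as at runtime, so a dependency downgrade cannot silently reintroduce the vulnerable range.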

Risk Assessment

Severity is maximal: CVSS 9.8 with network-accessible, zero-authentication, zero-interaction exploitation. The Python __subclasses__ class-traversal technique is well documented and PoC code is publicly available, making this trivially exploitable even by low-skill attackers. LangChain was the dominant LLM framework at the time of disclosure, so the blast radius across the AI/ML ecosystem was exceptionally high. Any internet-facing application built on LangChain that accepts or loads prompt configurations from user-controlled or external sources is at direct risk of full system compromise.

Affected Systems

Package: langchain
Ecosystem: pip
Vulnerable Range: ≤ 0.0.171
Patched: 0.0.312

Do you use langchain below v0.0.312? You're affected.

Severity & Risk

CVSS 3.1: 9.8 / 10
EPSS: 62.2% chance of exploitation in 30 days (higher than 98% of all CVEs)
Exploitation Status: Exploit Available
Exploitation: Medium
Sophistication: Trivial
Exploitation Confidence: Medium
CISA SSVC: Public PoC
Public PoC indexed (trickest/cve)
EPSS exploit prediction: 62%

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

AV: Network
AC: Low
PR: None
UI: None
S: Unchanged
C: High
I: High
A: High

Recommended Action

6 steps
  1. PATCH

    Upgrade LangChain to v0.0.312 or later — this is the minimum safe version per the vendor advisory.

  2. AUDIT

    Run grep -r 'load_prompt' across all codebases to enumerate every call site.

  3. INPUT VALIDATION

    Ensure no user-controlled data reaches prompt template file paths or JSON content.

  4. SANDBOXING

    If prompt loading from external sources is required, isolate the LangChain process in a container with minimal privileges and no access to sensitive credentials.

  5. DETECTION

    Monitor for unusual subprocess spawning or outbound network connections from LangChain processes.

  6. SECRETS ROTATION

    If exposure is suspected, rotate all API keys and credentials accessible to the affected process.
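The input-validation step above can be sketched as a wrapper that resolves any requested template path against an allowlisted directory before it ever reaches load_prompt(). The directory name and the safe_prompt_path helper below are illustrative, not part of LangChain's API:

```python
from pathlib import Path

# Hypothetical trusted location holding only vetted prompt templates.
TRUSTED_PROMPT_DIR = Path("/srv/app/prompts").resolve()

def safe_prompt_path(user_supplied: str) -> Path:
    """Resolve a requested template path and refuse anything outside
    the trusted directory (blocks ../ traversal and absolute paths)."""
    candidate = (TRUSTED_PROMPT_DIR / user_supplied).resolve()
    if not candidate.is_relative_to(TRUSTED_PROMPT_DIR):
        raise ValueError(f"untrusted prompt path: {user_supplied}")
    return candidate

# Only a vetted path would then be handed to LangChain, e.g.:
# prompt = load_prompt(safe_prompt_path(requested_name))
```

Note this only constrains *which file* is loaded; the files in the trusted directory must themselves be reviewed, since on vulnerable versions the JSON content is what carries the payload.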

CISA SSVC Assessment

Decision: Attend
Exploitation: PoC
Automatable: Yes
Technical Impact: Total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art.15 - Accuracy, robustness and cybersecurity
ISO 42001
8.4 - AI system operation and monitoring
NIST AI RMF
MANAGE-2.2 - Mechanisms are in place to sustain the value of deployed AI systems
OWASP LLM Top 10
LLM02 - Insecure Output Handling
LLM05 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2023-36281?

CVE-2023-36281 is an unauthenticated remote code execution vulnerability in LangChain v0.0.171 and earlier, triggered when prompt templates are loaded from attacker-controlled JSON files via load_prompt() — no user interaction required. Update to v0.0.312+ immediately and audit all uses of load_prompt() for untrusted input paths. If you cannot patch now, disable external prompt file loading and treat prompt template sources as a trust boundary.

Is CVE-2023-36281 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2023-36281, increasing the risk of exploitation.

How to fix CVE-2023-36281?

1. PATCH: Upgrade LangChain to v0.0.312 or later — this is the minimum safe version per the vendor advisory.
2. AUDIT: Run grep -r 'load_prompt' across all codebases to enumerate every call site.
3. INPUT VALIDATION: Ensure no user-controlled data reaches prompt template file paths or JSON content.
4. SANDBOXING: If prompt loading from external sources is required, isolate the LangChain process in a container with minimal privileges and no access to sensitive credentials.
5. DETECTION: Monitor for unusual subprocess spawning or outbound network connections from LangChain processes.
6. SECRETS ROTATION: If exposure is suspected, rotate all API keys and credentials accessible to the affected process.

What systems are affected by CVE-2023-36281?

This vulnerability affects the following AI/ML architecture patterns: agent frameworks, RAG pipelines, LLM application backends, multi-agent orchestration, prompt management systems.

What is the CVSS score for CVE-2023-36281?

CVE-2023-36281 has a CVSS v3.1 base score of 9.8 (CRITICAL). The EPSS exploitation probability is 62.24%.

Technical Details

NVD Description

An issue in langchain v.0.0.171 allows a remote attacker to execute arbitrary code via a JSON file to load_prompt. This is related to __subclasses__ or a template.

Exploitation Scenario

An adversary targets a company's internal AI assistant built on LangChain v0.0.171. The application exposes an endpoint that accepts a prompt template configuration file for custom agent personas. The attacker submits a crafted JSON file containing a malicious template that leverages Python's __subclasses__() method to traverse the class hierarchy and access os.system() or subprocess.Popen(). Upon loading, LangChain evaluates the template, executing the attacker's payload — typically a reverse shell or credential harvester. The attacker now has shell access to the AI infrastructure, exfiltrates OpenAI/Anthropic API keys from environment variables, pivots to connected vector databases, and extracts proprietary RAG document stores containing sensitive business data.
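The traversal primitive behind this scenario is ordinary Python, which is why exploitation is rated trivial. A benign illustration of the first step an attacker-controlled template expression performs — the class walk only, with no payload:

```python
# From any literal value, walk up to `object`, then enumerate every
# class loaded in the interpreter. This is the pivot a malicious
# template uses to locate a code-execution gadget by name.
base = ().__class__.__base__       # tuple -> object
loaded = base.__subclasses__()     # all live subclasses of object

# An attacker would filter this list for a dangerous class; here we
# only demonstrate that the enumeration is reachable from a literal.
names = {cls.__name__ for cls in loaded}
print(f"{len(loaded)} classes reachable from a bare tuple literal")
```

Because this runs inside normal attribute access, no import statement or eval() call is needed in the template itself — which is why treating template sources as a trust boundary is the durable fix.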

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

Timeline

Published: August 22, 2023
Last Modified: November 21, 2024
First Seen: August 22, 2023
