CVE-2023-36281: LangChain: RCE via malicious JSON prompt template
CRITICAL · PoC AVAILABLE · CISA: ATTEND

Any LangChain deployment on v0.0.171 or earlier that loads prompt templates from JSON files is vulnerable to unauthenticated remote code execution, with no user interaction required. Update to v0.0.312+ immediately and audit all uses of load_prompt() for untrusted input paths. If you cannot patch now, disable external prompt file loading and treat prompt template sources as a trust boundary.
Risk Assessment
Severity is maximal: CVSS 9.8 with network-accessible, zero-authentication, zero-interaction exploitation. The __subclasses__ Python class traversal technique is well-documented and PoC code is publicly available, making this trivially exploitable by script-kiddies. LangChain was the dominant LLM framework at time of disclosure, meaning blast radius across the AI/ML ecosystem was exceptionally high. Any internet-facing application built on LangChain that accepts or loads prompt configurations from user-controlled or external sources is at direct risk of full system compromise.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| langchain | pip | ≤ 0.0.171 | 0.0.312 |
If you run langchain at v0.0.171 or earlier and load prompt templates from external or user-controlled sources, you are affected.
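One quick way to check exposure is to compare the installed version against the patched release. A minimal sketch; `parse_version` and `langchain_is_vulnerable` are illustrative helper names, not LangChain APIs, and the parser assumes plain `X.Y.Z` version strings:

```python
from importlib.metadata import PackageNotFoundError, version

PATCHED = (0, 0, 312)  # minimum safe release per the vendor advisory

def parse_version(v: str) -> tuple:
    """Turn a plain 'X.Y.Z' version string into a comparable tuple."""
    return tuple(int(part) for part in v.split(".")[:3])

def langchain_is_vulnerable() -> bool:
    """True if the installed langchain predates the patched release."""
    try:
        return parse_version(version("langchain")) < PATCHED
    except PackageNotFoundError:
        return False  # langchain is not installed in this environment
```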
Recommended Action
1. PATCH: Upgrade LangChain to v0.0.312 or later; this is the minimum safe version per the vendor advisory.
2. AUDIT: Run `grep -r 'load_prompt'` across all codebases to enumerate every call site.
3. INPUT VALIDATION: Ensure no user-controlled data reaches prompt template file paths or JSON content.
4. SANDBOXING: If prompt loading from external sources is required, isolate the LangChain process in a container with minimal privileges and no access to sensitive credentials.
5. DETECTION: Monitor for unusual subprocess spawning or outbound network connections from LangChain processes.
6. SECRETS ROTATION: If exposure is suspected, rotate all API keys and credentials accessible to the affected process.
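The PATCH and AUDIT steps above can be sketched at the shell; the paths are illustrative and should be run from each repository root:

```shell
# Step 2 (AUDIT): enumerate every load_prompt call site in Python sources.
grep -rn --include='*.py' 'load_prompt' .

# Step 1 (PATCH): confirm the installed version, then upgrade past 0.0.312.
pip show langchain | grep -i '^Version'
pip install --upgrade 'langchain>=0.0.312'
```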
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2023-36281?
CVE-2023-36281 is an unauthenticated remote code execution vulnerability in LangChain v0.0.171 and earlier. A crafted JSON prompt template passed to load_prompt() can use Python's __subclasses__ class traversal to reach dangerous classes and execute arbitrary code, with no user interaction required. The fix is to upgrade to v0.0.312 or later and to treat prompt template sources as a trust boundary.
Is CVE-2023-36281 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2023-36281, increasing the risk of exploitation.
How to fix CVE-2023-36281?
1. PATCH: Upgrade LangChain to v0.0.312 or later; this is the minimum safe version per the vendor advisory.
2. AUDIT: Run `grep -r 'load_prompt'` across all codebases to enumerate every call site.
3. INPUT VALIDATION: Ensure no user-controlled data reaches prompt template file paths or JSON content.
4. SANDBOXING: If prompt loading from external sources is required, isolate the LangChain process in a container with minimal privileges and no access to sensitive credentials.
5. DETECTION: Monitor for unusual subprocess spawning or outbound network connections from LangChain processes.
6. SECRETS ROTATION: If exposure is suspected, rotate all API keys and credentials accessible to the affected process.
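The input-validation step can be sketched as a path allowlist: constrain every requested template file to one trusted directory before it ever reaches load_prompt(). `ALLOWED_DIR` and `safe_prompt_path` are hypothetical names for this sketch, not LangChain APIs:

```python
from pathlib import Path

# Hypothetical trusted template directory; only files below it may be loaded.
ALLOWED_DIR = Path("/srv/app/prompts")

def safe_prompt_path(requested: str) -> Path:
    """Resolve a requested template name; refuse anything outside ALLOWED_DIR."""
    candidate = (ALLOWED_DIR / requested).resolve()
    if ALLOWED_DIR.resolve() not in candidate.parents:
        raise ValueError(f"prompt path escapes trusted directory: {requested}")
    return candidate
```

On a patched LangChain version, the returned path (rather than raw user input) is what gets handed to the template loader.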
What systems are affected by CVE-2023-36281?
This vulnerability affects the following AI/ML architecture patterns: agent frameworks, RAG pipelines, LLM application backends, multi-agent orchestration, prompt management systems.
What is the CVSS score for CVE-2023-36281?
CVE-2023-36281 has a CVSS v3.1 base score of 9.8 (CRITICAL). The EPSS exploitation probability is 62.24%.
Technical Details
NVD Description
An issue in langchain v.0.0.171 allows a remote attacker to execute arbitrary code via a JSON file to load_prompt. This is related to __subclasses__ or a template.
Exploitation Scenario
An adversary targets a company's internal AI assistant built on LangChain v0.0.171. The application exposes an endpoint that accepts a prompt template configuration file for custom agent personas. The attacker submits a crafted JSON file containing a malicious template that leverages Python's __subclasses__() method to traverse the class hierarchy and access os.system() or subprocess.Popen(). Upon loading, LangChain evaluates the template, executing the attacker's payload — typically a reverse shell or credential harvester. The attacker now has shell access to the AI infrastructure, exfiltrates OpenAI/Anthropic API keys from environment variables, pivots to connected vector databases, and extracts proprietary RAG document stores containing sensitive business data.
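The traversal technique the scenario describes is a well-documented Python gadget hunt: every class loaded in the interpreter descends from `object`, so any template expression that can reach `object` can enumerate loaded classes without writing an `import`. A benign illustration that only lists a process-spawning class and executes nothing:

```python
import subprocess  # many LangChain apps load this; once loaded, Popen is discoverable

# Walk the direct subclasses of `object` and pick out classes able to spawn
# processes. A malicious template performs the same traversal at render time.
spawners = [cls for cls in object.__subclasses__() if cls.__name__ == "Popen"]
print(spawners)  # e.g. [<class 'subprocess.Popen'>]
```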
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

References
- aisec.today/LangChain-2e6244a313dd46139c5ef28cbcab9e55 Exploit 3rd Party
- github.com/hwchase17/langchain/issues/4394 Exploit Issue Vendor
- github.com/langchain-ai/langchain/releases/tag/v0.0.312
- github.com/miguelc49/CVE-2023-36281-1 Exploit
- github.com/miguelc49/CVE-2023-36281-2 Exploit
- github.com/nomi-sec/PoC-in-GitHub Exploit
- github.com/tagomaru/CVE-2023-36281 Exploit
Related Vulnerabilities
- CVE-2025-2828 (10.0, same package: langchain): LangChain RequestsToolkit: SSRF exposes cloud metadata
- CVE-2023-34541 (9.8, same package: langchain): LangChain: RCE via unsafe load_prompt deserialization
- CVE-2023-29374 (9.8, same package: langchain): LangChain: RCE via prompt injection in LLMMathChain
- CVE-2023-34540 (9.8, same package: langchain): LangChain: RCE via JiraAPIWrapper crafted input
- CVE-2023-36258 (9.8, same package: langchain): LangChain: unauthenticated RCE via code injection