CVE-2026-40087: LangChain: template injection leaks object attributes
Severity: MEDIUM

LangChain's DictPromptTemplate and ImagePromptTemplate classes failed to enforce attribute-access validation, allowing an attacker to inject f-string expressions like {obj.__class__.__init__.__globals__} that Python's formatter evaluates at runtime — potentially exposing API keys, environment variables, or application secrets embedded in the runtime object graph. LangChain is among the most widely deployed LLM frameworks in production, and the attack requires no authentication and no user interaction (CVSS AV:N/AC:L/PR:N/UI:N), so any application that passes user-controlled strings into these template classes is immediately reachable; however, CISA KEV lists no active exploitation, and no public exploits or scanner templates exist as of this writing. Patch langchain-core to ≥0.3.84 (0.x branch) or ≥1.2.28 (1.x branch) and audit all DictPromptTemplate and ImagePromptTemplate usage for user-supplied template strings.
Risk Assessment
Medium risk (CVSS 5.3) with an elevated operational concern due to LangChain's ubiquity across AI stacks. Low attack complexity and no authentication requirement lower the bar for exploitation significantly — any internet-exposed LangChain application accepting user-defined templates is in scope. The immediate impact is confidentiality only (no integrity or availability), but in AI agent contexts attribute traversal can reach tool configurations, memory stores, and credential objects. No CISA KEV listing, no EPSS data, and no public exploits reduce urgency relative to a critical-severity issue, but the ease of exploitation and the framework's prevalence keep this in the 'patch this sprint' category.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| langchain | pip | — | Fixed via langchain-core |
| langchain-core | pip | <0.3.84 (0.x), <1.2.28 (1.x) | 0.3.84 / 1.2.28 |
Recommended Action
- Patch: Upgrade langchain-core to ≥0.3.84 (0.x branch) or ≥1.2.28 (1.x branch); verify via pip show langchain-core.
- Audit: Search codebase for DictPromptTemplate and ImagePromptTemplate instantiations — flag every instance where the template string parameter originates from user input, environment variables read at runtime, or external data sources.
- Workaround if patching is delayed: validate template strings with a strict allowlist regex that rejects any curly-brace expression containing dot notation, bracket indexing, or nested braces (pattern: r'\{[^}]*[.\[][^}]*\}' or nested \{.*\{).
- Detection: Monitor LLM prompt logs and application output for Python object repr() strings, __dict__ dumps, or unexpected serializations that indicate attribute traversal succeeded.
- Defense-in-depth: Never pass raw user input as a template string — use PromptTemplate with validated variable substitution only.
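The interim workaround above can be sketched as a small validator. The two regexes follow the patterns given in the workaround bullet; the function name and structure are illustrative assumptions, not LangChain API:

```python
import re

# Reject any replacement field containing dot notation or bracket indexing,
# and any nested replacement field opening inside another field.
_ATTR_ACCESS = re.compile(r"\{[^}]*[.\[][^}]*\}")
_NESTED_FIELD = re.compile(r"\{[^}]*\{")

def is_safe_template(template: str) -> bool:
    """Allowlist check: True only if the template contains no attribute
    access, indexing, or nested replacement fields (hypothetical helper)."""
    return not (_ATTR_ACCESS.search(template) or _NESTED_FIELD.search(template))
```

A template like "Answer: {query}" passes, while "{q.__class__}", "{data[key]}", and the nested-specifier form "{v:{q.width}}" are all rejected before they ever reach str.format().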
Frequently Asked Questions
What is CVE-2026-40087?
CVE-2026-40087 is a template-injection vulnerability in LangChain's DictPromptTemplate and ImagePromptTemplate classes. Missing attribute-access validation lets attacker-supplied f-string expressions such as {obj.__class__.__init__.__globals__} be evaluated at format time, exposing object internals including API keys, environment variables, and application secrets. It is fixed in langchain-core 0.3.84 (0.x branch) and 1.2.28 (1.x branch).
Is CVE-2026-40087 actively exploited?
No confirmed active exploitation of CVE-2026-40087 has been reported, but organizations should still patch proactively.
How to fix CVE-2026-40087?
1. Patch: Upgrade langchain-core to ≥0.3.84 (0.x branch) or ≥1.2.28 (1.x branch); verify via pip show langchain-core.
2. Audit: Search codebase for DictPromptTemplate and ImagePromptTemplate instantiations — flag every instance where the template string parameter originates from user input, environment variables read at runtime, or external data sources.
3. Workaround if patching is delayed: validate template strings with a strict allowlist regex that rejects any curly-brace expression containing dot notation, bracket indexing, or nested braces (pattern: r'\{[^}]*[.\[][^}]*\}' or nested \{.*\{).
4. Detection: Monitor LLM prompt logs and application output for Python object repr() strings, __dict__ dumps, or unexpected serializations that indicate attribute traversal succeeded.
5. Defense-in-depth: Never pass raw user input as a template string — use PromptTemplate with validated variable substitution only.
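To confirm the upgrade took effect, the version reported by pip show langchain-core can be gated against the two fixed releases. This is a minimal sketch: the helper name and the naive three-part version parsing are assumptions (a production check would use packaging.version):

```python
def is_patched(version: str) -> bool:
    """True if a langchain-core version string is at or above the fix
    for its branch: 0.3.84 on 0.x, 1.2.28 on 1.x (hypothetical helper)."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    if parts[0] >= 1:
        return parts >= (1, 2, 28)
    return parts >= (0, 3, 84)
```

Feed the Version field from pip show langchain-core to this check; note that tuple comparison correctly treats 0.3.9 as older than 0.3.84, which naive string comparison would not.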
What systems are affected by CVE-2026-40087?
This vulnerability affects the following AI/ML architecture patterns: agent frameworks, RAG pipelines, LLM application backends, chatbot implementations.
What is the CVSS score for CVE-2026-40087?
CVE-2026-40087 has a CVSS v3.1 base score of 5.3 (MEDIUM).
Technical Details
NVD Description
LangChain is a framework for building agents and LLM-powered applications. Prior to 0.3.84 and 1.2.28, LangChain's f-string prompt-template validation was incomplete in two respects. First, some prompt template classes accepted f-string templates and formatted them without enforcing the same attribute-access validation as PromptTemplate. In particular, DictPromptTemplate and ImagePromptTemplate could accept templates containing attribute access or indexing expressions and subsequently evaluate those expressions during formatting. Second, f-string validation based on parsed top-level field names did not reject nested replacement fields inside format specifiers. In this pattern, the nested replacement field appears in the format specifier rather than in the top-level field name. As a result, earlier validation based on parsed field names did not reject the template even though Python formatting would still attempt to resolve the nested expression at runtime. This vulnerability is fixed in 0.3.84 and 1.2.28.
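The second flaw described above — nested replacement fields inside format specifiers — can be reproduced with the standard library alone. The Query class and its benign width attribute are illustrative stand-ins; an attacker would chain dunder attributes instead:

```python
import string

class Query:
    """Illustrative object handed to the formatter as a variable."""
    width = 8

# The nested field lives in the format specifier, not the field name.
tmpl = "{v:>{q.width}}"

# Validation based on parsed top-level field names sees only 'v':
names = [name for _, name, _, _ in string.Formatter().parse(tmpl) if name]
print(names)  # ['v']

# Yet formatting still resolves q.width at runtime.
print(repr(tmpl.format(v="hi", q=Query())))  # '      hi'
```

Because string.Formatter().parse() reports the nested expression only as part of the opaque format_spec string, a validator that inspects field names alone accepts the template, while str.format() goes on to evaluate the nested attribute access anyway.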
Exploitation Scenario
An adversary targets a customer-facing LangChain application that lets users customize their assistant's prompt template. The attacker submits a template string such as 'Answer: {query.__class__.__init__.__globals__[os].environ[OPENAI_API_KEY]}' through the UI. Because DictPromptTemplate skips attribute-access validation, LangChain accepts the template without rejection. When a legitimate query is processed, Python's str.format() resolves the chained attribute traversal at runtime, injecting the application's OpenAI API key directly into the formatted prompt. The key appears in LLM context, application logs, or is returned in the response — giving the attacker valid credentials to the AI backend at zero cost.
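The traversal in the scenario above can be demonstrated with plain str.format(), no LangChain required. The Query class, the DEMO_SECRET variable, and the module-level os import are assumptions standing in for an application's real object graph:

```python
import os

os.environ["DEMO_SECRET"] = "s3cr3t"  # planted for the demo

class Query:
    """Stand-in for any object whose __init__ is a plain Python function."""
    def __init__(self, text):
        self.text = text

# Query.__init__.__globals__ is this module's namespace, which contains the
# imported os module; from there, os.environ is one attribute and one index away.
malicious = "Answer: {q.__class__.__init__.__globals__[os].environ[DEMO_SECRET]}"
print(malicious.format(q=Query("hi")))  # Answer: s3cr3t
```

Note that format-string index keys like [os] and [DEMO_SECRET] are unquoted by design in Python's format mini-language, which is why the payload in the scenario needs no quote characters at all.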
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N

References
- github.com/langchain-ai/langchain/commit/6bab0ba3c12328008ddca3e0d54ff5a6151cd27b
- github.com/langchain-ai/langchain/commit/af2ed47c6f008cdd551f3c0d87db3774c8dfe258
- github.com/langchain-ai/langchain/pull/36612
- github.com/langchain-ai/langchain/pull/36613
- github.com/langchain-ai/langchain/releases/tag/langchain-core%3D%3D0.3.84
- github.com/langchain-ai/langchain/releases/tag/langchain-core%3D%3D1.2.28
- github.com/langchain-ai/langchain/security/advisories/GHSA-926x-3r5x-gfhw
Related Vulnerabilities
All of the following affect the same package (langchain-core):
- CVE-2025-68664 (8.2): Deserialization enables RCE
- CVE-2026-34070 (7.5): path traversal exposes host secrets via prompt config
- CVE-2024-10940 (5.3): file read via prompt template inputs
- GHSA-926x-3r5x-gfhw (5.3): f-string template injection exposes object internals
- CVE-2026-26013 (3.7): SSRF allows internal network access