CVE-2026-40087: LangChain: template injection leaks object attributes

MEDIUM
Published April 9, 2026
CISO Take

LangChain's DictPromptTemplate and ImagePromptTemplate classes failed to enforce attribute-access validation, allowing an attacker to inject f-string expressions like {obj.__class__.__init__.__globals__} that Python's formatter evaluates at runtime — potentially exposing API keys, environment variables, or application secrets embedded in the runtime object graph. LangChain is among the most widely deployed LLM frameworks in production, and the attack requires no authentication and zero user interaction (CVSS AV:N/AC:L/PR:N/UI:N), making any application that passes user-controlled strings into these template classes immediately reachable; however, CISA KEV lists no active exploitation and no public exploits or scanner templates exist as of this writing. Patch langchain-core to ≥0.3.84 (0.x branch) or ≥1.2.28 (1.x branch) and audit all DictPromptTemplate and ImagePromptTemplate usage for user-supplied template string inputs.

Sources: NVD · GitHub Advisory · ATLAS

Risk Assessment

Medium risk (CVSS 5.3) with an elevated operational concern due to LangChain's ubiquity across AI stacks. Low attack complexity and no authentication requirement lower the bar for exploitation significantly — any internet-exposed LangChain application accepting user-defined templates is in scope. The immediate impact is confidentiality only (no integrity or availability), but in AI agent contexts attribute traversal can reach tool configurations, memory stores, and credential objects. No CISA KEV listing, no EPSS data, and no public exploits reduce urgency compared to a critical-rated issue, but the ease of exploitation and framework prevalence keep this in the 'patch this sprint' category.

Attack Kill Chain

Template Crafting
Adversary crafts a malicious f-string template embedding Python attribute traversal expressions (e.g., {obj.__class__.__init__.__globals__[os].environ[SECRET]}) targeting DictPromptTemplate or ImagePromptTemplate input fields.
AML.T0065
Validation Bypass
The crafted template passes LangChain's incomplete field-name validation because attribute-access and nested-specifier checks are not enforced in the affected template classes.
AML.T0049
Runtime Evaluation
When the application formats the template with a legitimate query, Python's str.format() resolves the injected attribute chain at runtime, traversing the application's live object graph.
AML.T0051.000
Data Exfiltration
Sensitive application state — API keys, environment variables, database credentials, or internal configuration — is embedded in the formatted prompt string and exposed through LLM output, application logs, or API responses.
AML.T0057
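The chain above can be reproduced with nothing but Python's str.format(); a minimal sketch under illustrative assumptions (the Query class and the planted demo key are stand-ins, not LangChain code):

```python
import os

os.environ["OPENAI_API_KEY"] = "sk-demo-not-a-real-key"  # planted for the demo

class Query:
    """Stand-in for any application object handed to str.format()."""
    def __init__(self):
        self.text = "hello"

# Attacker-supplied template: walks instance -> class -> __init__ ->
# module globals -> os -> environ, all inside one replacement field.
tmpl = "Answer: {q.__class__.__init__.__globals__[os].environ[OPENAI_API_KEY]}"
print(tmpl.format(q=Query()))  # -> Answer: sk-demo-not-a-real-key
```

Note that format-string index keys like [os] and [OPENAI_API_KEY] are treated as string literals, so no quoting is needed — the payload is a single ordinary replacement field.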

Affected Systems

Package         Ecosystem  Vulnerable Range                Patched
langchain       pip        —                               No patch
langchain-core  pip        <0.3.84 (0.x), <1.2.28 (1.x)    0.3.84 / 1.2.28

Severity & Risk

CVSS 3.1
5.3 / 10
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Trivial

Attack Surface

AV Network
AC Low
PR None
UI None
S Unchanged
C Low
I None
A None

Recommended Action

  1. Patch: Upgrade langchain-core to ≥0.3.84 (0.x branch) or ≥1.2.28 (1.x branch); verify via pip show langchain-core.
  2. Audit: Search codebase for DictPromptTemplate and ImagePromptTemplate instantiations — flag every instance where the template string parameter originates from user input, environment variables read at runtime, or external data sources.
  3. Workaround if patching is delayed: validate template strings with a strict denylist regex that rejects any curly-brace expression containing dot notation, bracket indexing, or nested braces (patterns: r'\{[^}]*[.\[][^}]*\}' for attribute access/indexing, or nested \{.*\{).
  4. Detection: Monitor LLM prompt logs and application output for Python object repr() strings, __dict__ dumps, or unexpected serializations that indicate attribute traversal succeeded.
  5. Defense-in-depth: Never pass raw user input as a template string — use PromptTemplate with validated variable substitution only.
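The workaround in step 3 can be wrapped in a small helper; a sketch under illustrative assumptions (the function and pattern names are hypothetical, not part of LangChain):

```python
import re

# Denylist: reject any replacement field containing '.' or '[', or any
# opening brace nested inside another field.
_UNSAFE = re.compile(r"\{[^}]*[.\[][^}]*\}|\{[^{}]*\{")

def is_safe_template(template: str) -> bool:
    """Return True only if no curly-brace field uses attribute access,
    indexing, or nested replacement fields."""
    return _UNSAFE.search(template) is None

print(is_safe_template("Hello {name}"))                   # True
print(is_safe_template("{q.__class__}"))                  # False
print(is_safe_template("{text:{q.__class__.__name__}}"))  # False
```

A denylist like this is a stopgap, not a substitute for the patch — it rejects the known traversal shapes rather than proving a template safe.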

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 9 - Risk management system
ISO 42001
8.4 - AI system operation — data and input quality controls
NIST AI RMF
MAP-5.2 - AI system practices and personnel are evaluated for trustworthiness
OWASP LLM Top 10
LLM01:2025 - Prompt Injection LLM02:2025 - Sensitive Information Disclosure

Frequently Asked Questions

What is CVE-2026-40087?

LangChain's DictPromptTemplate and ImagePromptTemplate classes failed to enforce attribute-access validation, allowing an attacker to inject f-string expressions like {obj.__class__.__init__.__globals__} that Python's formatter evaluates at runtime — potentially exposing API keys, environment variables, or application secrets embedded in the runtime object graph. LangChain is among the most widely deployed LLM frameworks in production, and the attack requires no authentication and zero user interaction (CVSS AV:N/AC:L/PR:N/UI:N), making any application that passes user-controlled strings into these template classes immediately reachable; however, CISA KEV lists no active exploitation and no public exploits or scanner templates exist as of this writing. Patch langchain-core to ≥0.3.84 (0.x branch) or ≥1.2.28 (1.x branch) and audit all DictPromptTemplate and ImagePromptTemplate usage for user-supplied template string inputs.

Is CVE-2026-40087 actively exploited?

No confirmed active exploitation of CVE-2026-40087 has been reported, but organizations should still patch proactively.

How to fix CVE-2026-40087?

1. Patch: Upgrade langchain-core to ≥0.3.84 (0.x branch) or ≥1.2.28 (1.x branch); verify via pip show langchain-core.
2. Audit: Search codebase for DictPromptTemplate and ImagePromptTemplate instantiations — flag every instance where the template string parameter originates from user input, environment variables read at runtime, or external data sources.
3. Workaround if patching is delayed: validate template strings with a strict denylist regex that rejects any curly-brace expression containing dot notation, bracket indexing, or nested braces (patterns: r'\{[^}]*[.\[][^}]*\}' for attribute access/indexing, or nested \{.*\{).
4. Detection: Monitor LLM prompt logs and application output for Python object repr() strings, __dict__ dumps, or unexpected serializations that indicate attribute traversal succeeded.
5. Defense-in-depth: Never pass raw user input as a template string — use PromptTemplate with validated variable substitution only.

What systems are affected by CVE-2026-40087?

This vulnerability affects the following AI/ML architecture patterns: agent frameworks, RAG pipelines, LLM application backends, chatbot implementations.

What is the CVSS score for CVE-2026-40087?

CVE-2026-40087 has a CVSS v3.1 base score of 5.3 (MEDIUM).

Technical Details

NVD Description

LangChain is a framework for building agents and LLM-powered applications. Prior to 0.3.84 and 1.2.28, LangChain's f-string prompt-template validation was incomplete in two respects. First, some prompt template classes accepted f-string templates and formatted them without enforcing the same attribute-access validation as PromptTemplate. In particular, DictPromptTemplate and ImagePromptTemplate could accept templates containing attribute access or indexing expressions and subsequently evaluate those expressions during formatting. Second, f-string validation based on parsed top-level field names did not reject nested replacement fields inside format specifiers. In this pattern, the nested replacement field appears in the format specifier rather than in the top-level field name. As a result, earlier validation based on parsed field names did not reject the template even though Python formatting would still attempt to resolve the nested expression at runtime. This vulnerability is fixed in 0.3.84 and 1.2.28.
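The second flaw described above can be seen with the stdlib Formatter: parsing reports only the top-level field name, while formatting still resolves the field nested in the format specifier. A sketch using a benign width variable where an attacker would place an attribute-traversal chain:

```python
from string import Formatter

tmpl = "{text:{width}}"

# Validation that inspects only parsed top-level field names sees just
# 'text' -- the nested field hides inside the format specifier.
fields = [name for _, name, _, _ in Formatter().parse(tmpl) if name]
print(fields)  # ['text']

# Yet str.format() still resolves the nested field at runtime before
# applying it as the format spec.
print(repr(tmpl.format(text="hi", width=10)))  # 'hi        '
```

This is why field-name-based validation alone cannot catch the bypass: the dangerous expression never appears as a field name.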

Exploitation Scenario

An adversary targets a customer-facing LangChain application that lets users customize their assistant's prompt template. The attacker submits a template string such as 'Answer: {query.__class__.__init__.__globals__[os].environ[OPENAI_API_KEY]}' through the UI. Because DictPromptTemplate skips attribute-access validation, LangChain accepts the template without rejection. When a legitimate query is processed, Python's str.format() resolves the chained attribute traversal at runtime, injecting the application's OpenAI API key directly into the formatted prompt. The key appears in LLM context, application logs, or is returned in the response — giving the attacker valid credentials to the AI backend at zero cost.
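By contrast, substitution mechanisms that accept only plain identifiers leave such payloads inert. A sketch using the stdlib string.Template as an illustration of restricted variable substitution (not LangChain's API):

```python
from string import Template

# string.Template accepts only $name / ${name} identifiers -- there is
# no attribute-access or indexing syntax to traverse objects.
safe = Template("Answer: $query")
print(safe.safe_substitute(query="What is LangChain?"))

# The traversal payload is not a valid identifier, so it is left as
# literal text instead of being resolved.
evil = Template("Answer: ${query.__class__.__init__.__globals__}")
print(evil.safe_substitute(query="ignored"))
```

The same principle underlies the defense-in-depth recommendation above: substitution grammars that cannot express attribute access cannot leak the object graph.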

Weaknesses (CWE)

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N

Timeline

Published
April 9, 2026
Last Modified
April 9, 2026
First Seen
April 9, 2026

Related Vulnerabilities