GHSA-926x-3r5x-gfhw: LangChain: f-string template injection exposes object internals

GHSA-926x-3r5x-gfhw MEDIUM
Published April 8, 2026
CISO Take

LangChain's DictPromptTemplate and ImagePromptTemplate classes failed to enforce attribute-access and indexing restrictions on f-string templates, allowing crafted template strings like "{message.additional_kwargs[secret]}" to traverse Python object internals during formatting and surface sensitive data in prompt output, model context, or logs. Impact is conditional — exploitation requires both attacker control over the template structure itself and the application passing rich Python objects (not just strings or numbers) into formatting, a combination typical of no-code LLM workflow builders or multi-tenant prompt authoring platforms. With CVSS 5.3, no public exploit, and no CISA KEV listing, this is a medium-priority finding, but LangChain's ubiquity in enterprise AI stacks elevates collective exposure. Teams using langchain-core < 0.3.83 with user-authored templates should upgrade to 0.3.84; applications with developer-controlled hardcoded templates require no action.

Sources: GitHub Advisory · OpenSSF · ATLAS

Risk Assessment

Medium risk. CVSS 5.3 (AV:N/AC:L/PR:N/UI:N) reflects network-reachable exposure with low attack complexity once preconditions are met. However, dual preconditions — untrusted template authoring AND rich Python objects in scope — significantly constrain the exploitable population. No public exploit or scanner template exists. OpenSSF Scorecard of 6.2/10 and 5 prior CVEs in langchain-core indicate a pattern of security debt in this dependency. EPSS data unavailable, but the conditional nature makes opportunistic mass exploitation unlikely in the near term.

Affected Systems

Package          Ecosystem   Vulnerable Range   Patched
langchain-core   pip         < 0.3.83           0.3.84

Do you use langchain-core below 0.3.83? You're affected.

Severity & Risk

CVSS 3.1: 5.3 / 10
EPSS: N/A
Exploitation Status: No known exploitation
Sophistication: Moderate

Recommended Action

  1. PATCH: Upgrade langchain-core to >= 0.3.84 immediately (fixes both DictPromptTemplate/ImagePromptTemplate validation and nested format-specifier bypass).
  2. AUDIT: Identify all application paths where end users can supply template strings rather than only template variable values.
  3. RESTRICT: If user-authored templates are required, enforce allowlisting — only expose simple primitive values (str, int, float) to user-controlled template slots; never pass Message, Document, or complex internal objects.
  4. DETECT: Review application logs for template strings containing dot-notation or bracket expressions (e.g., regex: `\{[^}]+[.\[][^}]+\}`).
  5. HARDEN: Consider templating sandboxes (Jinja2 with sandbox environment, restricted eval) if user template authoring is a core feature.
  6. VERIFY: Confirm langchain-community and langchain packages are also updated, as they transitively depend on langchain-core.
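As a lightweight interim guard along the lines of the RESTRICT and DETECT steps above, an application can scan user-authored templates with the suggested regex and strip non-primitive values before formatting. The function names here are illustrative, not part of LangChain's API:

```python
import re

# Matches replacement fields containing dot-notation or bracket indexing,
# e.g. "{message.additional_kwargs[secret]}" (the regex from the DETECT step).
SUSPICIOUS_FIELD = re.compile(r"\{[^}]+[.\[][^}]+\}")

def is_suspicious_template(template: str) -> bool:
    """Flag templates whose replacement fields traverse object internals."""
    return bool(SUSPICIOUS_FIELD.search(template))

def restrict_to_primitives(variables: dict) -> dict:
    """Drop any template variable that is not a simple primitive value."""
    allowed = (str, int, float, bool)
    return {k: v for k, v in variables.items() if isinstance(v, allowed)}

# A user-authored template probing message internals is flagged,
# and a rich object is stripped before it reaches formatting.
assert is_suspicious_template("{message.additional_kwargs[secret]}")
assert not is_suspicious_template("Hello {name}")
assert restrict_to_primitives({"name": "a", "msg": object()}) == {"name": "a"}
```

Note that the regex also flags benign dotted format specifiers such as `{value:.2f}`, so treat matches as candidates for review rather than definitive findings.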

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.2.5 - AI system input validation controls
NIST AI RMF
MANAGE 2.2 - Mechanisms to sustain the value of deployed AI
OWASP LLM Top 10
LLM01:2025 - Prompt Injection LLM02:2025 - Sensitive Information Disclosure

Technical Details

NVD Description

LangChain's f-string prompt-template validation was incomplete in two respects. First, some prompt template classes accepted f-string templates and formatted them without enforcing the same attribute-access validation as `PromptTemplate`. In particular, `DictPromptTemplate` and `ImagePromptTemplate` could accept templates containing attribute access or indexing expressions and subsequently evaluate those expressions during formatting. Examples of the affected shape include:

```python
"{message.additional_kwargs[secret]}"
"https://example.com/{image.__class__.__name__}.png"
```

Second, f-string validation based on parsed top-level field names did not reject nested replacement fields inside format specifiers. For example:

```python
"{name:{name.__class__.__name__}}"
```

In this pattern, the nested replacement field appears in the format specifier rather than in the top-level field name. As a result, earlier validation based on parsed field names did not reject the template even though Python formatting would still attempt to resolve the nested expression at runtime.

Affected usage

This issue is only relevant for applications that accept untrusted template strings, rather than only untrusted template variable values. In addition, practical impact depends on what objects are passed into template formatting:

  - If applications only format simple values such as strings and numbers, impact is limited and may only result in formatting errors.
  - If applications format richer Python objects, attribute access and indexing may interact with internal object state during formatting.

In many deployments, these conditions are not commonly present together. Applications that allow end users to author arbitrary templates often expose only a narrow set of simple template variables, while applications that work with richer internal Python objects often keep template structure under developer control. As a result, the highest-impact scenario is plausible but is not representative of all LangChain applications. Applications that use hardcoded templates or that only allow users to provide variable values are not affected by this issue.

Impact

The direct issue in `DictPromptTemplate` and `ImagePromptTemplate` allowed attribute access and indexing expressions to survive template construction and then be evaluated during formatting. When richer Python objects were passed into formatting, this could expose internal fields or nested data to prompt output, model context, or logs.

The nested format-spec issue is narrower in scope. It bypassed the intended validation rules for f-string templates, but in simple cases it results in an invalid format specifier error rather than direct disclosure. Accordingly, its practical impact is lower than that of direct top-level attribute traversal.

Overall, the practical severity depends on deployment. Meaningful confidentiality impact requires attacker control over the template structure itself, and higher impact further depends on the surrounding application passing richer internal Python objects into formatting.

Fix

The fix consists of two changes. First, LangChain now applies f-string safety validation consistently to `DictPromptTemplate` and `ImagePromptTemplate`, so templates containing attribute access or indexing expressions are rejected during construction and deserialization. Second, LangChain now rejects nested replacement fields inside f-string format specifiers.

Concretely, LangChain validates parsed f-string fields and raises an error for:

  - variable names containing attribute access or indexing syntax such as `.` or `[]`
  - format specifiers containing `{` or `}`

This blocks templates such as:

```python
"{message.additional_kwargs[secret]}"
"https://example.com/{image.__class__.__name__}.png"
"{name:{name.__class__.__name__}}"
```

The fix preserves ordinary f-string formatting features such as standard format specifiers and conversions, including examples like:

```python
"{value:.2f}"
"{value:>10}"
"{value!r}"
```

In addition, the explicit template-validation path now applies the same structural f-string checks before performing placeholder validation, ensuring that the security checks and validation checks remain aligned.
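The validation rules described above can be approximated with Python's standard `string.Formatter.parse`, which splits a template into literal text, field names, and format specifiers. The sketch below is an illustrative re-implementation of the described checks, not LangChain's actual code:

```python
from string import Formatter

def validate_fstring_template(template: str) -> None:
    """Reject f-string templates whose fields use attribute access,
    indexing, or nested replacement fields inside format specifiers."""
    for _literal, field_name, format_spec, _conv in Formatter().parse(template):
        if field_name is not None and ("." in field_name or "[" in field_name):
            raise ValueError(f"attribute access or indexing in field: {field_name!r}")
        if format_spec is not None and ("{" in format_spec or "}" in format_spec):
            raise ValueError(f"nested replacement field in format spec: {format_spec!r}")

# Ordinary format specifiers and conversions pass...
validate_fstring_template("{value:.2f} {value:>10} {value!r}")

# ...while traversal and nested-spec templates are rejected.
for bad in ("{message.additional_kwargs[secret]}",
            "{name:{name.__class__.__name__}}"):
    try:
        validate_fstring_template(bad)
    except ValueError:
        pass
    else:
        raise AssertionError(f"{bad!r} should have been rejected")
```

Checking field names and format specifiers separately matters here: `Formatter.parse` reports a nested replacement field as part of the format specifier, so a check on field names alone misses the `{name:{...}}` bypass.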

Exploitation Scenario

An adversary targeting a multi-tenant LLM application platform — such as a customer-facing AI assistant builder where users define their own system prompts — submits the template string `"{message.additional_kwargs[api_key]}"` as their custom prompt. The platform's backend, unaware of the validation gap in DictPromptTemplate, constructs the template without rejecting the attribute access expression. When a legitimate user's session invokes the assistant and LangChain formats the prompt using a LangChain HumanMessage object that carries internal state in additional_kwargs, the expression resolves and the API key value is injected into the prompt text. The LLM echoes or processes it, and the attacker retrieves the exfiltrated credential from the model's response or from a shared session log. An alternative vector targets ImagePromptTemplate in applications processing user-uploaded images, using `"{image.__class__.__name__}"` to probe object internals and enumerate internal class structure for follow-on exploitation.
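The mechanism in this scenario can be reproduced with plain `str.format`, which is what f-string templates reduce to at formatting time. The `FakeMessage` class below is a hypothetical stand-in for a rich message object such as LangChain's `HumanMessage`, and the key value is invented for illustration:

```python
class FakeMessage:
    """Stand-in for a rich message object carrying internal state."""
    def __init__(self):
        self.additional_kwargs = {"api_key": "sk-live-EXAMPLE"}

# An attacker-authored template traverses the object during formatting
# and surfaces the internal value in the rendered prompt text.
template = "{message.additional_kwargs[api_key]}"
leaked = template.format(message=FakeMessage())
assert leaked == "sk-live-EXAMPLE"

# The class-probing variant resolves the same way, enumerating internals.
probe = "{message.__class__.__name__}".format(message=FakeMessage())
assert probe == "FakeMessage"
```

No code execution is involved: Python's format mini-language resolves attribute access and indexing natively, which is why the fix rejects these templates at construction time rather than trying to sandbox formatting.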

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N

Timeline

Published
April 8, 2026
Last Modified
April 8, 2026
First Seen
April 9, 2026

Related Vulnerabilities