CVE-2025-65106

GHSA-6qv9-48xg-fc7f HIGH
Published November 21, 2025
CISO Take

If your organization uses LangChain and exposes prompt template string customization to end users or external inputs, patch immediately to langchain-core 1.0.7 or 0.3.80. The critical distinction: this only affects apps that allow users to define template *structure*, not just fill template *variables* — audit your LangChain integrations to determine actual exposure before assuming you are safe. Template injection in LangChain can expose Python internals and potentially enable remote code execution, making this a high-priority patch for any multi-tenant or user-facing LLM application.

Affected Systems

Package         Ecosystem   Vulnerable Range      Patched
langchain-core  pip         >= 1.0.0, <= 1.0.6    1.0.7
langchain-core  pip         <= 0.3.79             0.3.80

Do you use langchain-core in an affected range? You're in scope, but actual exposure depends on whether untrusted input can define template strings, not just fill template variables.
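To triage quickly, you can compare the installed langchain-core version against the vulnerable ranges above. This is a minimal standard-library sketch; it assumes plain X.Y.Z version strings (no pre-release suffixes) and `is_affected` is an illustrative helper name:

```python
from importlib.metadata import version, PackageNotFoundError

def is_affected(ver: str) -> bool:
    """Return True if a langchain-core version falls in a vulnerable range.

    Vulnerable: <= 0.3.79 (0.3.x branch and earlier), or 1.0.0 through 1.0.6.
    """
    parts = tuple(int(p) for p in ver.split(".")[:3])
    if parts >= (1, 0, 0):
        return parts <= (1, 0, 6)
    return parts <= (0, 3, 79)

try:
    installed = version("langchain-core")
    print(f"langchain-core {installed}: affected={is_affected(installed)}")
except PackageNotFoundError:
    print("langchain-core not installed")
```

Remember that a non-vulnerable version check only rules out this CVE; the audit step below still applies to any code that accepts user-defined template strings.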

Severity & Risk

CVSS 3.1
N/A
EPSS
0.1%
chance of exploitation in 30 days
KEV Status
Not in KEV
Sophistication
Moderate

Recommended Action

  1. PATCH: Upgrade langchain-core to 1.0.7 (1.x branch) or 0.3.80 (0.3.x branch) immediately; patches are available.
  2. AUDIT: Inventory all LangChain integrations and identify any code path where untrusted input reaches ChatPromptTemplate or related classes as a template *string* (not a template *variable*).
  3. WORKAROUND (if patching is delayed): Enforce system-defined template strings only; user input must only fill named variables via pre-validated inputs. Never pass raw user input as the template string itself.
  4. DETECT: Review application logs for template syntax patterns in user inputs ({__class__}, {__mro__}, {__subclasses__}, {__globals__}); add WAF or input validation rules to block Python object traversal patterns.
  5. VERIFY: After patching, re-test all user-facing prompt customization flows to confirm template injection is no longer triggerable.
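The workaround and detection steps above can be sketched in pure Python. Note that `DUNDER_PATTERN`, `validate_user_input`, and `render_prompt` are illustrative names for this sketch, not LangChain APIs, and the regex is one plausible filter, not an exhaustive blocklist:

```python
import re

# Reject template syntax that probes Python object internals
# (the {__class__}, {__mro__}, {__subclasses__}, {__globals__} family).
DUNDER_PATTERN = re.compile(r"\{[^}]*__[A-Za-z_]+__[^}]*\}")

def validate_user_input(text: str) -> str:
    """Raise if untrusted input contains object-traversal template syntax."""
    if DUNDER_PATTERN.search(text):
        raise ValueError("suspicious template syntax in user input")
    return text

# The safe pattern: the template string is system-defined; user input only
# ever fills a named variable and is never parsed as a template itself.
SYSTEM_TEMPLATE = "You are a helpful assistant. User instructions: {user_instructions}"

def render_prompt(user_text: str) -> str:
    return SYSTEM_TEMPLATE.format(user_instructions=validate_user_input(user_text))

print(render_prompt("Answer politely."))
```

Filling a variable via `.format(...)` does not re-parse the substituted value as template syntax, so the fixed-template pattern is the primary control; the regex check is defense in depth for logging and WAF rules.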

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
Article 9 - Risk management system
ISO 42001
A.6.1.2 - Risk assessment for AI systems
A.9.2 - AI System Testing and Validation
A.9.4 - AI system operation and monitoring
NIST AI RMF
GOVERN 6.1 - Policies and procedures for AI risk management
MANAGE 2.2 - Mechanisms to sustain AI risk management activities over the AI system lifecycle
MEASURE 2.5 - AI Risk and Trustworthiness Measurement
OWASP LLM Top 10
LLM01:2025 - Prompt Injection
LLM02:2025 - Sensitive Information Disclosure
LLM05:2025 - Improper Output Handling

Technical Details

NVD Description

LangChain is a framework for building agents and LLM-powered applications. In versions 0.3.79 and prior, and in versions 1.0.0 through 1.0.6, a template injection vulnerability exists in LangChain's prompt template system that allows attackers to access Python object internals through template syntax. This vulnerability affects applications that accept untrusted template strings (not just template variables) in ChatPromptTemplate and related prompt template classes. The issue has been patched in versions 0.3.80 and 1.0.7.

Exploitation Scenario

Consider an attacker targeting a LangChain-based AI chatbot builder, a common class of SaaS product in which end users create custom AI assistants with custom system prompts. The attacker creates a free account and, in the 'Custom System Prompt' field, enters a template string such as: 'You are helpful. {system.__class__.__mro__[1].__subclasses__()}'. The vulnerable ChatPromptTemplate processes this as live Python template syntax, returning internal class-hierarchy data in the rendered prompt output. The attacker iterates to locate os.environ access, extracting Stripe API keys, LLM API keys, and database credentials embedded in server environment variables. In a targeted scenario against an enterprise deployment, this pivot grants access to the underlying infrastructure, compromising multiple tenant environments from a single unprivileged account.
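The underlying primitive is ordinary Python format-string behavior: replacement fields resolve dotted attribute access. A simplified demonstration with the standard library alone (no LangChain required), reusing the `system` variable name from the scenario above:

```python
# str.format resolves attribute access inside replacement fields, which is
# what makes an attacker-controlled template string dangerous.
malicious_template = "You are helpful. {system.__class__.__mro__}"

# The application believes it is merely filling in a 'system' variable...
rendered = malicious_template.format(system="be concise")

# ...but the rendered prompt now leaks the str class hierarchy.
print(rendered)
```

From leaked class hierarchies an attacker can iterate, as described above, toward objects that reference sensitive state; the fix is never to treat untrusted input as the template string.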

Timeline

Published
November 21, 2025
Last Modified
December 9, 2025
First Seen
November 21, 2025