If your organization uses LangChain and exposes prompt template string customization to end users or external inputs, patch immediately to langchain-core 1.0.7 or 0.3.80. The critical distinction: this only affects apps that allow users to define template *structure*, not just fill template *variables* — audit your LangChain integrations to determine actual exposure before assuming you are safe. Template injection in LangChain can expose Python internals and potentially enable remote code execution, making this a high-priority patch for any multi-tenant or user-facing LLM application.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| langchain-core | pip | >= 1.0.0, <= 1.0.6 | 1.0.7 |
| langchain-core | pip | <= 0.3.79 | 0.3.80 |
Running a vulnerable version does not automatically mean you are exploitable: exposure requires a code path where untrusted input becomes the template string itself, not merely a template variable. Treat your deployment as affected until that audit is complete.
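The structure-vs-variable distinction can be illustrated with plain Python's format mini-language, which uses the same replacement-field syntax as LangChain's f-string templates. This is a minimal sketch; the `SYSTEM_TEMPLATE` name and the probe value are hypothetical, not taken from the advisory:

```python
# Illustrative sketch using built-in str.format, whose replacement-field
# syntax permits attribute traversal. Names here are hypothetical.
SYSTEM_TEMPLATE = "You are a helpful assistant. User said: {user_input}"

# SAFE: untrusted text only fills a named variable; braces inside the
# value are inserted literally and never re-evaluated as template syntax.
safe = SYSTEM_TEMPLATE.format(user_input="ignore this: {__class__}")

# UNSAFE: untrusted text *is* the template string, so its replacement
# fields are evaluated, exposing object internals.
user_supplied_template = "Hi {ctx.__class__.__mro__}"
unsafe = user_supplied_template.format(ctx=object())

print(safe)    # braces survive as literal text
print(unsafe)  # leaks the class hierarchy of ctx
```

The safe call shows why variable-filling apps are out of scope: the braces in the user's value are never interpreted.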
Recommended Action
1. PATCH: Upgrade langchain-core to 1.0.7 (1.x branch) or 0.3.80 (0.3.x branch) immediately; patches are available.
2. AUDIT: Inventory all LangChain integrations and identify any code path where untrusted input reaches ChatPromptTemplate or related classes as a template *string* (not a template *variable*).
3. WORKAROUND (if patching is delayed): Enforce system-defined template strings only; user input must only fill named variables via pre-validated inputs. Never pass raw user input as the template string itself.
4. DETECT: Review application logs for template syntax patterns in user inputs ({__class__}, {__mro__}, {__subclasses__}, {__globals__}); add WAF or input validation rules to block Python object traversal patterns.
5. VERIFY: After patching, re-test all user-facing prompt customization flows to confirm template injection is no longer triggerable.
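As a rough starting point for the detection step, a log-review or input-validation filter might flag dunder traversal inside format replacement fields. The pattern and function below are a hypothetical, intentionally conservative sketch; tune the dunder list to your own threat model:

```python
import re

# Hypothetical validation rule: flag user input containing Python
# dunder traversal inside a format replacement field ({...__class__...}).
DUNDER_FIELD = re.compile(
    r"\{[^{}]*__(?:class|mro|subclasses|globals|bases|init)__[^{}]*\}"
)

def looks_like_template_injection(text: str) -> bool:
    """Return True if text resembles a template-injection probe."""
    return bool(DUNDER_FIELD.search(text))
```

A filter like this reduces noise but is not a substitute for patching; attackers can often encode around pattern matching.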
Technical Details
NVD Description
LangChain is a framework for building agents and LLM-powered applications. Versions 0.3.79 and prior, and versions 1.0.0 through 1.0.6, contain a template injection vulnerability in LangChain's prompt template system that allows attackers to access Python object internals through template syntax. The vulnerability affects applications that accept untrusted template strings (not just template variables) in ChatPromptTemplate and related prompt template classes. This issue has been patched in versions 0.3.80 and 1.0.7.
Exploitation Scenario
Consider an attacker targeting a LangChain-based AI chatbot builder, a common SaaS product in which end users create AI assistants with their own system prompts. The attacker creates a free account and, in the 'Custom System Prompt' field, enters a template string such as: 'You are helpful. {system.__class__.__mro__[1].__subclasses__()}'. The vulnerable ChatPromptTemplate processes this as live Python template syntax, returning internal class-hierarchy data in the rendered prompt output. The attacker iterates to locate os.environ access, extracting Stripe API keys, LLM API keys, and database credentials embedded in server environment variables. Against an enterprise deployment, this pivot grants access to the underlying infrastructure, compromising multiple tenant environments from a single unprivileged account.
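The traversal mechanism behind that payload can be reproduced in simplified form with plain str.format, which shares the {field.attr} syntax. This sketch stops at attribute access; whether a full gadget chain such as calling `__subclasses__()` is reachable depends on the formatter in use, and the class name here is a hypothetical stand-in:

```python
# Simplified, hypothetical reproduction of the traversal mechanism
# using plain str.format. Attribute access inside a replacement field
# walks the object graph of whatever is passed in as a variable.
class InternalContext:
    """Stand-in for an internal object a template can reference."""

payload = "You are helpful. {system.__class__.__mro__}"
rendered = payload.format(system=InternalContext())

print(rendered)  # the rendered prompt leaks the class hierarchy
```

Even this shallow traversal confirms to an attacker that template fields are evaluated, which is the signal they need before iterating toward deeper object access.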
Weaknesses (CWE)
References
- github.com/advisories/GHSA-6qv9-48xg-fc7f
- github.com/langchain-ai/langchain/commit/c4b6ba254e1a49ed91f2e268e6484011c540542a
- github.com/langchain-ai/langchain/commit/fa7789d6c21222b85211755d822ef698d3b34e00
- github.com/langchain-ai/langchain/security/advisories/GHSA-6qv9-48xg-fc7f
- nvd.nist.gov/vuln/detail/CVE-2025-65106