If your organization uses LangChain and exposes prompt template string customization to end users or external inputs, patch immediately to langchain-core 1.0.7 or 0.3.80. The critical distinction: this only affects apps that allow users to define template *structure*, not just fill template *variables* — audit your LangChain integrations to determine actual exposure before assuming you are safe. Template injection in LangChain can expose Python internals and potentially enable remote code execution, making this a high-priority patch for any multi-tenant or user-facing LLM application.
Risk Assessment
High risk for organizations running LangChain-based applications that accept user-defined template strings. Attack surface is narrower than a universal LangChain vulnerability — it requires the application to expose template string customization, not just variable substitution. However, organizations building AI agents, chatbot builders, or prompt customization platforms on LangChain are directly at risk. EPSS of 0.00086 suggests no active widespread exploitation yet, but SSTI-class vulnerabilities are well-understood by attackers and trivial to weaponize once the vulnerability mechanics are public. Not in CISA KEV, but LangChain's prevalence across AI production systems elevates aggregate organizational risk significantly.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| langchain-core | pip | >= 1.0.0, <= 1.0.6 | 1.0.7 |
| langchain-core | pip | <= 0.3.79 | 0.3.80 |

If you run langchain-core in either range, you may be affected — actual exposure depends on whether untrusted input can reach a prompt template as the template *string*.
Recommended Action
Five steps:

1. **PATCH**: Upgrade langchain-core to 1.0.7 (1.x branch) or 0.3.80 (0.3.x branch) immediately; patches are available.
2. **AUDIT**: Inventory all LangChain integrations and identify any code path where untrusted input reaches ChatPromptTemplate or related classes as a template *string* (not a template *variable*).
3. **WORKAROUND** (if patching is delayed): Enforce system-defined template strings only; user input must only fill named variables via pre-validated inputs. Never pass raw user input as the template string itself.
4. **DETECT**: Review application logs for template syntax patterns in user inputs (`{__class__}`, `{__mro__}`, `{__subclasses__}`, `{__globals__}`); add WAF or input-validation rules to block Python object-traversal patterns.
5. **VERIFY**: After patching, re-test all user-facing prompt customization flows to confirm template injection is no longer triggerable.
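The WORKAROUND step above can be sketched in plain Python. The names here (`SYSTEM_TEMPLATE`, `ALLOWED_VARS`, `render_prompt`) are illustrative, not LangChain API: the point is that the application owns the template string, and untrusted input only ever binds to a fixed set of named variables.

```python
# Sketch of the workaround: the template *string* is system-defined and
# untrusted input only fills pre-validated named variables.
# SYSTEM_TEMPLATE, ALLOWED_VARS and render_prompt are hypothetical names,
# not LangChain API.

SYSTEM_TEMPLATE = "You are {assistant_name}. The user says: {user_message}"
ALLOWED_VARS = {"assistant_name", "user_message"}

def render_prompt(**user_vars: str) -> str:
    unexpected = set(user_vars) - ALLOWED_VARS
    if unexpected:
        raise ValueError(f"unexpected template variables: {sorted(unexpected)}")
    # str.format evaluates braces only in SYSTEM_TEMPLATE, never inside the
    # substituted values, so injection payloads arrive as inert text.
    return SYSTEM_TEMPLATE.format(**user_vars)

prompt = render_prompt(
    assistant_name="Helper",
    user_message="{__class__.__mro__}",  # rendered literally, not evaluated
)
```

Because substituted values are never re-parsed as template syntax, a payload like `{__class__.__mro__}` lands in the prompt as literal text.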
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-65106?
CVE-2025-65106 is a template injection vulnerability in langchain-core (versions 0.3.79 and prior, and 1.0.0 through 1.0.6) that lets attackers access Python object internals through template syntax when an application accepts untrusted template strings in ChatPromptTemplate or related prompt template classes. It can expose Python internals and potentially enable remote code execution, and is patched in versions 0.3.80 and 1.0.7.
Is CVE-2025-65106 actively exploited?
No confirmed active exploitation of CVE-2025-65106 has been reported, but organizations should still patch proactively.
How to fix CVE-2025-65106?
1. **PATCH**: Upgrade langchain-core to 1.0.7 (1.x branch) or 0.3.80 (0.3.x branch) immediately; patches are available.
2. **AUDIT**: Inventory all LangChain integrations and identify any code path where untrusted input reaches ChatPromptTemplate or related classes as a template *string* (not a template *variable*).
3. **WORKAROUND** (if patching is delayed): Enforce system-defined template strings only; user input must only fill named variables via pre-validated inputs. Never pass raw user input as the template string itself.
4. **DETECT**: Review application logs for template syntax patterns in user inputs (`{__class__}`, `{__mro__}`, `{__subclasses__}`, `{__globals__}`); add WAF or input-validation rules to block Python object-traversal patterns.
5. **VERIFY**: After patching, re-test all user-facing prompt customization flows to confirm template injection is no longer triggerable.
What systems are affected by CVE-2025-65106?
This vulnerability affects the following AI/ML architecture patterns: agent frameworks, RAG pipelines, LLM application builders, prompt management systems, chatbot platforms.
What is the CVSS score for CVE-2025-65106?
No CVSS score has been assigned yet.
Technical Details
NVD Description
LangChain is a framework for building agents and LLM-powered applications. In versions 0.3.79 and prior, and in versions 1.0.0 through 1.0.6, a template injection vulnerability exists in LangChain's prompt template system that allows attackers to access Python object internals through template syntax. The vulnerability affects applications that accept untrusted template strings (not just template variables) in ChatPromptTemplate and related prompt template classes. The issue is patched in versions 0.3.80 and 1.0.7.
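The template-string-versus-variable distinction can be shown with the generic `str.format()` mechanic that this class of bug abuses. This is plain Python for illustration, not LangChain's actual rendering code; the `Config` class is a hypothetical stand-in for an object exposed to the renderer.

```python
# Plain-Python illustration of the str.format() mechanic behind this class
# of template-injection bug. Not LangChain's actual rendering code; Config
# is a hypothetical stand-in for an object exposed to the renderer.

class Config:
    """Stand-in for an object reachable from the template."""

# Attacker-controlled *template string*: the braces are evaluated, so
# attribute traversal reaches Python object internals.
user_template = "You are helpful. {obj.__class__.__mro__}"
rendered = user_template.format(obj=Config())
print(rendered)  # leaks the class hierarchy instead of literal braces

# The same payload passed only as a *variable* is inert text:
safe_template = "You are helpful. {user_message}"
safe_rendered = safe_template.format(user_message="{obj.__class__.__mro__}")
print(safe_rendered)  # braces appear literally; nothing is evaluated
```

This is why the advisory stresses auditing for untrusted template *strings*: only the template itself is parsed for field access, never the substituted values.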
Exploitation Scenario
Consider an attacker targeting a LangChain-based AI chatbot builder, a common SaaS product in which end users create custom AI assistants with custom system prompts. The attacker creates a free account and, in the 'Custom System Prompt' field, enters a template string such as: 'You are helpful. {system.__class__.__mro__[1].__subclasses__()}'. The vulnerable ChatPromptTemplate processes this as live Python template syntax, returning internal class-hierarchy data in the rendered prompt output. The attacker iterates to locate os.environ access, extracting Stripe API keys, LLM API keys, and database credentials embedded in server environment variables. In a targeted scenario against an enterprise deployment, this pivot grants access to the underlying infrastructure, compromising multiple tenant environments from a single unprivileged account.
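A heuristic filter for the object-traversal payloads described above might look like the following sketch. The regex and function name are illustrative assumptions for log review, not a complete WAF rule, and a determined attacker may evade simple pattern matching.

```python
import re

# Heuristic detector for Python object-traversal payloads in template-like
# input ({__class__}, {__mro__}, {__subclasses__}, {__globals__}, ...).
# An illustrative sketch for log review, not a complete WAF rule.
DUNDER_PATTERN = re.compile(
    r"\{[^{}]*__(?:class|mro|subclasses|globals|base|init)__[^{}]*\}"
)

def looks_like_template_injection(text: str) -> bool:
    return bool(DUNDER_PATTERN.search(text))

print(looks_like_template_injection(
    "You are helpful. {system.__class__.__mro__[1].__subclasses__()}"))  # True
print(looks_like_template_injection("Please summarize {topic} for me"))  # False
```

Legitimate prompt variables like `{topic}` pass through, while dunder traversal inside braces is flagged for review.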
References
- github.com/advisories/GHSA-6qv9-48xg-fc7f
- nvd.nist.gov/vuln/detail/CVE-2025-65106
- github.com/langchain-ai/langchain/commit/c4b6ba254e1a49ed91f2e268e6484011c540542a
- github.com/langchain-ai/langchain/commit/fa7789d6c21222b85211755d822ef698d3b34e00
- github.com/langchain-ai/langchain/security/advisories/GHSA-6qv9-48xg-fc7f
Related Vulnerabilities
- CVE-2026-44843 (8.2) LangChain: deserialization poisons LLM chat history (same package: langchain-core)
- CVE-2025-68664 (8.2) langchain-core: deserialization enables RCE (same package: langchain-core)
- CVE-2026-34070 (7.5) langchain-core: path traversal exposes host secrets via prompt config (same package: langchain-core)
- GHSA-926x-3r5x-gfhw (5.3) LangChain: f-string template injection exposes object internals (same package: langchain-core)
- CVE-2024-10940 (5.3) langchain-core: file read via prompt template inputs (same package: langchain-core)