LangChain SSRF in token counting for vision models allows unauthenticated attackers to trigger internal network requests by supplying malicious image URLs in multimodal inputs. CVSS 3.7 understates real-world risk: in cloud-hosted AI applications, SSRF reaches cloud metadata services (AWS IMDSv1, GCP), enabling credential theft beyond the stated availability-only impact. Patch to langchain-core >= 1.2.11 now; any public-facing LangChain app accepting vision inputs is exposed.
Risk Assessment
Effective risk is MEDIUM despite the Low CVSS score. AC:H is questionable in practice: any application accepting user-supplied multimodal messages satisfies the attack condition with no special setup. The CVSS vector credits only A:L, but SSRF in cloud environments routinely yields credential exfiltration via IMDSv1 endpoints, so the vector understates the confidentiality impact. Not in KEV and no known active exploitation, but the exploitation path is trivial once the condition is met. Exposure is broad: LangChain is the dominant LLM framework, and vision model usage is growing rapidly.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| langchain-core | pip | < 1.2.11 | 1.2.11 |
Recommended Action
5 steps:
1. Patch: Upgrade `langchain-core` to >= 1.2.11 immediately. Run `pip show langchain-core` to confirm the installed version.
2. Workaround if patching is blocked: Validate and allowlist image URLs in user input before passing them to ChatOpenAI; reject non-HTTPS URLs and URLs resolving to RFC 1918 or link-local ranges.
3. Cloud hardening: Enable IMDSv2 (hop limit = 1) on all EC2/GCE instances running LangChain to block metadata SSRF impact. Disable IMDSv1 explicitly.
4. Network controls: Restrict egress from LangChain application hosts to required destinations only; block 169.254.169.254, 100.100.100.200, and internal RFC 1918 ranges at the host firewall.
5. Detection: Log and alert on outbound HTTP requests from LLM application processes to non-approved destinations. Monitor application logs for requests to metadata endpoints.
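The workaround in step 2 can be sketched as a small pre-flight check. This is an illustrative helper, not part of LangChain's API; the function name `is_safe_image_url` is our own. Note that a resolve-then-check approach remains vulnerable to DNS rebinding unless the resolved address is also pinned for the actual fetch.

```python
# Sketch of step 2: validate user-supplied image URLs before they reach
# LangChain. Hypothetical helper, assuming HTTPS-only image sources.
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_image_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme != "https":      # rejects http://, file://, gopher://, ...
        return False
    host = parsed.hostname
    if not host:
        return False
    try:
        addrs = [ipaddress.ip_address(host)]            # literal IP address
    except ValueError:
        try:                                            # hostname: resolve it
            infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
            addrs = [ipaddress.ip_address(info[4][0]) for info in infos]
        except socket.gaierror:
            return False
    # Every resolved address must be publicly routable. This blocks RFC 1918
    # ranges, loopback, and the 169.254.169.254 link-local metadata endpoint.
    return all(addr.is_global for addr in addrs)
```

A caller would run this over every `image_url` in an incoming multimodal message and reject the request before any token counting occurs.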
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2026-26013?
LangChain SSRF in token counting for vision models allows unauthenticated attackers to trigger internal network requests by supplying malicious image URLs in multimodal inputs. CVSS 3.7 understates real-world risk: in cloud-hosted AI applications, SSRF reaches cloud metadata services (AWS IMDSv1, GCP), enabling credential theft beyond the stated availability-only impact. Patch to langchain-core >= 1.2.11 now; any public-facing LangChain app accepting vision inputs is exposed.
Is CVE-2026-26013 actively exploited?
No confirmed active exploitation of CVE-2026-26013 has been reported, but organizations should still patch proactively.
How to fix CVE-2026-26013?
1. Patch: Upgrade `langchain-core` to >= 1.2.11 immediately. Run `pip show langchain-core` to confirm the installed version.
2. Workaround if patching is blocked: Validate and allowlist image URLs in user input before passing them to ChatOpenAI; reject non-HTTPS URLs and URLs resolving to RFC 1918 or link-local ranges.
3. Cloud hardening: Enable IMDSv2 (hop limit = 1) on all EC2/GCE instances running LangChain to block metadata SSRF impact. Disable IMDSv1 explicitly.
4. Network controls: Restrict egress from LangChain application hosts to required destinations only; block 169.254.169.254, 100.100.100.200, and internal RFC 1918 ranges at the host firewall.
5. Detection: Log and alert on outbound HTTP requests from LLM application processes to non-approved destinations. Monitor application logs for requests to metadata endpoints.
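The IMDSv2 hardening in step 3 can be applied on AWS with a single CLI call. This is a sketch; the instance ID below is a placeholder, and the change applies per instance (fleet-wide enforcement would typically go through launch templates or an SCP).

```shell
# Require IMDSv2 session tokens and cap the response hop limit at 1,
# which stops SSRF'd containers/processes from reaching the metadata
# service via forwarded requests. Instance ID is a placeholder.
aws ec2 modify-instance-metadata-options \
  --instance-id i-0123456789abcdef0 \
  --http-tokens required \
  --http-put-response-hop-limit 1 \
  --http-endpoint enabled
```

With `--http-tokens required`, the plain GET used in the IMDSv1 SSRF scenario returns 401 instead of credentials.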
What systems are affected by CVE-2026-26013?
This vulnerability affects the following AI/ML architecture patterns: Agent frameworks (LangChain-based), Vision/multimodal LLM pipelines, Multi-tenant SaaS LLM applications, RAG pipelines with image ingestion, Model serving with user-controlled multimodal inputs.
What is the CVSS score for CVE-2026-26013?
CVE-2026-26013 has a CVSS v3.1 base score of 3.7 (LOW). The EPSS exploitation probability is 0.02%.
Technical Details
NVD Description
LangChain is a framework for building agents and LLM-powered applications. Prior to 1.2.11, the ChatOpenAI.get_num_tokens_from_messages() method fetches arbitrary image_url values without validation when computing token counts for vision-enabled models. This allows attackers to trigger Server-Side Request Forgery (SSRF) attacks by providing malicious image URLs in user input. This vulnerability is fixed in 1.2.11.
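The flawed pattern described above can be illustrated in miniature. This is not LangChain's actual implementation, only a hypothetical helper showing why token counting for vision models ends up fetching attacker-controlled URLs; the per-byte token math is a crude stand-in for the real tile-based accounting.

```python
# Illustrative sketch of the vulnerable pattern (not LangChain's real code):
# counting tokens for a vision model requires measuring the image, so the
# image_url is fetched server-side with no validation, before any LLM call.
import urllib.request

def naive_image_token_count(message: dict) -> int:
    """Hypothetical token counter mirroring the flaw in
    get_num_tokens_from_messages(): every image_url is fetched as-is."""
    total = 0
    for part in message.get("content", []):
        if part.get("type") == "image_url":
            url = part["image_url"]["url"]
            # SSRF: `url` is attacker-controlled and fetched unconditionally.
            data = urllib.request.urlopen(url).read()
            total += len(data) // 4   # crude stand-in for tile-based math
    return total
```

The fix in 1.2.11 is to validate or refuse remote URLs in this path; the sketch exists only to show where the request fires.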
Exploitation Scenario
An attacker submits a multimodal chat message to a public-facing LangChain application (e.g., a GPT-4o assistant accepting images): the message contains an `image_url` pointing to `http://169.254.169.254/latest/meta-data/iam/security-credentials/` (AWS IMDSv1). When the application calls `get_num_tokens_from_messages()` to estimate cost/context before the LLM call, LangChain fetches the URL server-side without validation. On an EC2 host with IMDSv1 enabled, the response returns IAM role credentials scoped to the instance's role permissions. The attacker never needs to interact with the LLM itself: the vulnerability fires in the preprocessing step, making it invisible to LLM-level input filtering.
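The malicious message in this scenario could look like the following in the OpenAI-style multimodal chat format. The payload is illustrative; only the metadata endpoint path comes from the scenario above.

```python
# Illustrative attacker payload: an OpenAI-style multimodal chat message
# whose image_url targets the AWS IMDSv1 credentials endpoint, not an image.
METADATA_URL = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

malicious_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this picture?"},
        # The server-side token counter fetches this URL before any LLM call,
        # so prompt-level input filtering never sees the request happen.
        {"type": "image_url", "image_url": {"url": METADATA_URL}},
    ],
}
```

Nothing about the message is malformed, which is why schema validation alone does not catch it; the URL destination must be checked explicitly.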
Weaknesses (CWE)
CWE-918: Server-Side Request Forgery (SSRF)
CVSS Vector
CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:L
References
- github.com/advisories/GHSA-2g6r-c272-w58r
- nvd.nist.gov/vuln/detail/CVE-2026-26013
- github.com/langchain-ai/langchain/commit/2b4b1dc29a833d4053deba4c2b77a3848c834565 (patch commit)
- github.com/langchain-ai/langchain/releases/tag/langchain-core%3D%3D1.2.11 (product release)
- github.com/langchain-ai/langchain/security/advisories/GHSA-2g6r-c272-w58r (vendor advisory / mitigation)
Related Vulnerabilities
- CVE-2025-2828 (10.0): LangChain RequestsToolkit: SSRF exposes cloud metadata (same package: langchain)
- CVE-2023-34541 (9.8): LangChain: RCE via unsafe load_prompt deserialization (same package: langchain)
- CVE-2023-29374 (9.8): LangChain: RCE via prompt injection in LLMMathChain (same package: langchain)
- CVE-2023-34540 (9.8): LangChain: RCE via JiraAPIWrapper crafted input (same package: langchain)
- CVE-2023-36258 (9.8): LangChain: unauthenticated RCE via code injection (same package: langchain)