An SSRF vulnerability in LangChain's token counting for vision models allows unauthenticated attackers to trigger internal network requests by supplying malicious image URLs in multimodal inputs. The CVSS score of 3.7 understates real-world risk: in cloud-hosted AI applications, SSRF reaches cloud metadata services (AWS IMDSv1, GCP), enabling credential theft well beyond the stated availability-only impact. Patch to langchain-core >= 1.2.11 now; any public-facing LangChain app accepting vision inputs is exposed.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| langchain-core | pip | < 1.2.11 | 1.2.11 |
Recommended Action
1. Patch: Upgrade langchain-core to >= 1.2.11 immediately. Run `pip show langchain-core` to confirm the installed version.
2. Workaround if patching is blocked: Validate and allowlist image URLs in user input before passing them to ChatOpenAI — reject non-HTTPS URLs and URLs resolving to RFC 1918/link-local ranges.
3. Cloud hardening: Enable IMDSv2 (hop limit = 1) on all EC2/GCE instances running LangChain to block metadata SSRF impact. Disable IMDSv1 explicitly.
4. Network controls: Restrict egress from LangChain application hosts to required destinations only; block 169.254.169.254, 100.100.100.200, and internal RFC 1918 ranges at the host firewall.
5. Detection: Log and alert on outbound HTTP requests from LLM application processes to non-approved destinations. Monitor application logs for requests to metadata endpoints.
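The workaround in step 2 can be sketched as a pre-submission URL check. This is a minimal illustration using only the standard library; the function name `is_safe_image_url` and the exact policy (HTTPS-only, reject private/link-local/loopback/reserved addresses) are assumptions you should adapt to your environment, not a LangChain API:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_image_url(url: str) -> bool:
    """Return True only for HTTPS URLs whose host resolves to public addresses."""
    parsed = urlparse(url)
    if parsed.scheme != "https":          # reject http:, file:, gopher:, etc.
        return False
    host = parsed.hostname
    if host is None:
        return False
    try:
        # Resolve the host; IP literals resolve without a DNS lookup.
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        # Block RFC 1918, link-local (169.254.0.0/16 incl. IMDS), loopback, reserved.
        if ip.is_private or ip.is_link_local or ip.is_loopback or ip.is_reserved:
            return False
    return True
```

Note that a resolve-then-fetch check like this is still subject to DNS rebinding (the host may resolve differently at fetch time), so it complements, rather than replaces, the egress controls in step 4.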
Technical Details
NVD Description
LangChain is a framework for building agents and LLM-powered applications. Prior to 1.2.11, the ChatOpenAI.get_num_tokens_from_messages() method fetches arbitrary image_url values without validation when computing token counts for vision-enabled models. This allows attackers to trigger Server-Side Request Forgery (SSRF) attacks by providing malicious image URLs in user input. This vulnerability is fixed in 1.2.11.
Exploitation Scenario
An attacker submits a multimodal chat message to a public-facing LangChain application (e.g., a GPT-4o assistant accepting images): the message contains an `image_url` pointing to `http://169.254.169.254/latest/meta-data/iam/security-credentials/` (AWS IMDSv1). When the application calls `get_num_tokens_from_messages()` to estimate cost/context before the LLM call, LangChain fetches the URL server-side without validation. On an EC2 host with IMDSv1 enabled, the response returns IAM role credentials with full AWS access. The attacker never needs to interact with the LLM itself — the vulnerability fires in the preprocessing step, making it invisible to LLM-level input filtering.
Weaknesses (CWE)
- CWE-918: Server-Side Request Forgery (SSRF)
CVSS Vector
CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:L
References
- github.com/advisories/GHSA-2g6r-c272-w58r
- github.com/langchain-ai/langchain/commit/2b4b1dc29a833d4053deba4c2b77a3848c834565 (Patch)
- github.com/langchain-ai/langchain/releases/tag/langchain-core%3D%3D1.2.11 (Product Release)
- github.com/langchain-ai/langchain/security/advisories/GHSA-2g6r-c272-w58r (Vendor Advisory, Mitigation)
- nvd.nist.gov/vuln/detail/CVE-2026-26013