GHSA-r7w7-9xr2-qq2r: langchain-openai: SSRF DNS rebinding, blind network probe
GHSA-r7w7-9xr2-qq2r (Severity: LOW)

A DNS rebinding flaw in langchain-openai's image URL handling allows an attacker-controlled hostname to pass SSRF validation but then resolve to an internal IP at fetch time, enabling blind probing of internal network topology from within the application server's network context. With 2,703 downstream dependents, aggregate exposure across the LangChain ecosystem is notable despite the low CVSS score of 3.1; however, high attack complexity and required user interaction keep active exploitation unlikely — the vulnerability is absent from CISA KEV and has no public exploit or scanner template. The practical blast radius is constrained because the fetched response is consumed only by Pillow for image dimension extraction and never returned to the caller, ruling out direct data exfiltration and limiting impact to blind internal reconnaissance. Teams using multimodal LLM vision features via langchain-openai should upgrade to version 1.1.14 or later, which also requires langchain-core >= 1.2.31.
Risk Assessment
Low exploitability due to high attack complexity (attacker must control a domain and operate a fast-rebinding DNS server within a narrow timing window) and required user interaction (victim application must invoke image token counting with an attacker-supplied URL). No data exfiltration path exists — blind network probing is the ceiling of impact. Wide deployment surface via 2,703 dependents increases aggregate likelihood across the ecosystem despite per-instance difficulty. Not in CISA KEV; no public exploit or Nuclei scanner template available.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| langchain-openai | pip | < 1.1.14 | 1.1.14 |
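To check whether an installed environment falls in the vulnerable range, a minimal check can be sketched as follows. This is an illustration, not an official tool: `is_vulnerable` and `check_installed` are hypothetical helper names, and the naive tuple comparison assumes plain `X.Y.Z` version strings (a production check should use `packaging.version` instead).

```python
from importlib.metadata import PackageNotFoundError, version

def is_vulnerable(ver: str) -> bool:
    # Naive numeric compare; assumes a plain "X.Y.Z" version string.
    # The vulnerable range is < 1.1.14.
    return tuple(int(p) for p in ver.split(".")[:3]) < (1, 1, 14)

def check_installed() -> None:
    # Look up the locally installed distribution, if any.
    try:
        v = version("langchain-openai")
    except PackageNotFoundError:
        print("langchain-openai is not installed")
        return
    status = "VULNERABLE (< 1.1.14)" if is_vulnerable(v) else "patched"
    print(f"langchain-openai {v}: {status}")
```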
Recommended Action
1. Upgrade langchain-openai to >= 1.1.14 (also requires langchain-core >= 1.2.31, which provides the SSRFSafeSyncTransport primitive).
2. If immediate patching is not feasible, restrict accepted image URLs to a strict allowlist of trusted CDN domains at the application layer before they reach langchain.
3. As network-layer defense-in-depth, block outbound HTTP from LLM application servers to RFC 1918 private ranges, loopback, and 169.254.169.254 (cloud metadata).
4. Monitor for anomalous outbound connections from LLM application servers to internal IP ranges as an indicator of exploitation attempts.
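The application-layer allowlist in step 2 can be sketched as follows. This is a minimal illustration: `ALLOWED_IMAGE_HOSTS` and `is_allowed_image_url` are hypothetical names, and the domain set is a placeholder for your actual trusted CDNs.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of trusted CDN hosts; replace with your own.
ALLOWED_IMAGE_HOSTS = {"cdn.example.com", "images.example.com"}

def is_allowed_image_url(url: str) -> bool:
    """Accept only HTTPS URLs whose exact hostname is on the allowlist.

    Exact-match comparison avoids suffix tricks such as
    "cdn.example.com.attacker.net".
    """
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_IMAGE_HOSTS
```

Applied before any URL reaches langchain, this sidesteps DNS rebinding entirely for untrusted input, since attacker-controlled hostnames never get resolved at all.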
Frequently Asked Questions
What is GHSA-r7w7-9xr2-qq2r?
A DNS rebinding flaw in langchain-openai's image URL handling allows an attacker-controlled hostname to pass SSRF validation but then resolve to an internal IP at fetch time, enabling blind probing of internal network topology from within the application server's network context. With 2,703 downstream dependents, aggregate exposure across the LangChain ecosystem is notable despite the low CVSS score of 3.1; however, high attack complexity and required user interaction keep active exploitation unlikely — the vulnerability is absent from CISA KEV and has no public exploit or scanner template. The practical blast radius is constrained because the fetched response is consumed only by Pillow for image dimension extraction and never returned to the caller, ruling out direct data exfiltration and limiting impact to blind internal reconnaissance. Teams using multimodal LLM vision features via langchain-openai should upgrade to version 1.1.14 or later, which also requires langchain-core >= 1.2.31.
Is GHSA-r7w7-9xr2-qq2r actively exploited?
No confirmed active exploitation of GHSA-r7w7-9xr2-qq2r has been reported, but organizations should still patch proactively.
How to fix GHSA-r7w7-9xr2-qq2r?
1. Upgrade langchain-openai to >= 1.1.14 (also requires langchain-core >= 1.2.31, which provides the SSRFSafeSyncTransport primitive).
2. If immediate patching is not feasible, restrict accepted image URLs to a strict allowlist of trusted CDN domains at the application layer before they reach langchain.
3. As network-layer defense-in-depth, block outbound HTTP from LLM application servers to RFC 1918 private ranges, loopback, and 169.254.169.254 (cloud metadata).
4. Monitor for anomalous outbound connections from LLM application servers to internal IP ranges as an indicator of exploitation attempts.
What systems are affected by GHSA-r7w7-9xr2-qq2r?
This vulnerability affects the following AI/ML architecture patterns: Multimodal and vision LLM pipelines, LLM API integrations, Agent frameworks with image processing.
What is the CVSS score for GHSA-r7w7-9xr2-qq2r?
GHSA-r7w7-9xr2-qq2r has a CVSS v3.1 base score of 3.1 (LOW).
Technical Details
NVD Description
## Summary

`langchain-openai`'s `_url_to_size()` helper (used by `get_num_tokens_from_messages` for image token counting) validated URLs for SSRF protection and then fetched them in a separate network operation with independent DNS resolution. This left a TOCTOU / DNS rebinding window: an attacker-controlled hostname could resolve to a public IP during validation and then to a private/localhost IP during the actual fetch.

The practical impact is limited because the fetched response body is passed directly to Pillow's `Image.open()` to extract dimensions — the response content is never returned, logged, or otherwise exposed to the caller. An attacker cannot exfiltrate data from internal services through this path. The remaining risk is blind probing (inferring whether an internal host/port is open based on timing or error behavior).

## Affected versions

- `langchain-openai` < 1.1.14

## Patched versions

- `langchain-openai` >= 1.1.14 (requires `langchain-core` >= 1.2.31)

## Affected code

**File:** `libs/partners/openai/langchain_openai/chat_models/base.py` — `_url_to_size()`

The vulnerable pattern was validate-then-fetch with separate DNS resolution:

```python
validate_safe_url(image_source, allow_private=False, allow_http=True)
# ... separate network operation with independent DNS resolution ...
response = httpx.get(image_source, timeout=timeout)
```

## Fix

The fix replaces the validate-then-fetch pattern with an SSRF-safe httpx transport (`SSRFSafeSyncTransport` from `langchain-core`) that:

- Resolves DNS once and validates all returned IPs against a policy (private ranges, cloud metadata, localhost, k8s internal DNS)
- Pins the connection to the validated IP, eliminating the DNS rebinding window
- Disables redirect following to prevent redirect-based SSRF bypasses

This fix was released in langchain-openai 1.1.14.
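The resolve-once-and-validate half of this mitigation can be sketched with the standard library alone. This is a conceptual illustration, not the actual `SSRFSafeSyncTransport` implementation: `resolve_and_validate` is a hypothetical helper, and a real transport would additionally pin the subsequent connection to one of the returned IPs and disable redirects.

```python
import ipaddress
import socket

def resolve_and_validate(hostname: str) -> list[str]:
    """Resolve a hostname once and reject any private/internal address.

    The caller must then connect directly to one of the returned IPs
    (connection pinning) instead of re-resolving the hostname at fetch
    time, which is what closes the DNS rebinding window.
    """
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    ips = sorted({info[4][0] for info in infos})
    for ip in ips:
        addr = ipaddress.ip_address(ip)
        # Blocks RFC 1918 ranges, loopback, and link-local addresses
        # such as the 169.254.169.254 cloud metadata endpoint.
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_multicast):
            raise ValueError(f"{hostname} resolved to blocked address {ip}")
    return ips
```

Because validation and connection use the same single resolution, a rebinding DNS server gains nothing by changing its answer between requests.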
Exploitation Scenario
An adversary targeting an internal service — such as the AWS metadata endpoint at 169.254.169.254 or a Kubernetes internal service — registers an attacker-controlled domain with a very short TTL. They submit a multimodal chat request to a GPT-4V-enabled chatbot built on langchain-openai, embedding their domain as the image URL. During SSRF validation, the domain resolves to a benign public IP and passes the policy check. The attacker's authoritative DNS server then immediately rebinds the domain to the internal target IP. When langchain-openai performs its independent HTTP fetch to calculate image tokens, it resolves DNS again and now connects to the internal host. By observing connection timing and error types across repeated requests to different ports, the attacker maps reachable internal hosts and services without any payload data leaving the target environment.
CVSS Vector
CVSS:3.1/AV:N/AC:H/PR:N/UI:R/S:U/C:L/I:N/A:N
Related Vulnerabilities
- CVE-2025-61260 (CVSS 9.8), same package (openai): OpenAI Codex CLI: RCE via malicious MCP config files
- GHSA-gqqj-85qm-8qhf (CVSS 8.7), same package (openai): paperclipai: connector trust bypass enables Gmail read/write
- GHSA-w8hx-hqjv-vjcq (CVSS 7.3), same package (openai): Paperclip: RCE via workspace runtime command injection
- CVE-2026-39411 (CVSS 5.0), same package (openai): LobeChat: auth bypass via forged XOR obfuscated header
- CVE-2025-53767 (CVSS 10.0), same attack type (Data Extraction): Azure OpenAI: SSRF EoP, no auth required