CVE-2026-45134 affects the LangSmith SDK's prompt pull methods, which deserialize prompt manifests from LangSmith Hub without enforcing a trust boundary between public external prompts and organization-owned ones—allowing an attacker who publishes a malicious prompt to inject attacker-controlled LangChain object constructor arguments, including a custom base_url that silently redirects all LLM inference traffic to attacker-controlled infrastructure. Redirected requests may expose provider API keys, system prompts, retrieved RAG context, and user data, while the secrets_from_env=True parameter creates an additional environment-variable exfiltration vector; with 2,640 downstream dependents and a package risk score of 77/100, the blast radius across LangChain-based AI deployments is significant. Although not in CISA KEV and lacking a public exploit, the low attack complexity—requiring only that a victim application pulls a public prompt by owner/name—makes this a credible supply chain threat for any agentic or CI/CD pipeline that auto-pulls prompts from the Hub. Upgrade to langsmith Python >= 0.8.0 or JS/TS >= 0.6.0 immediately, audit all pull_prompt and pullPrompt call sites for public owner/name identifiers, and rotate LANGSMITH_API_KEY if compromise is suspected.
What is the risk?
High. CVSS 7.1 (AV:N/AC:L/PR:N/UI:R) with confidentiality impact HIGH reflects realistic SSRF and credential exfiltration potential. Attack complexity is low—no authentication required, no special privileges needed—and the trust boundary violation is trivial to exploit by any entity that can publish to LangSmith Hub. The 2,640 downstream dependents, 51 prior CVEs in the same package ecosystem, and an OpenSSF Scorecard of 6.4/10 compound supply chain risk. The package risk score of 77/100 aligns with elevated concern for production AI workloads using LangChain-based pipelines.
What systems are affected?
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| langchain | pip | < 0.3.30 | 0.3.30 |
| langchain-classic | pip | < 1.0.7 | 1.0.7 |
| langsmith | npm | < 0.6.0 | 0.6.0 |
| langsmith | pip | < 0.8.0 | 0.8.0 |
What should I do?
7 steps:
1. Upgrade immediately: langsmith Python >= 0.8.0, langsmith JS/TS >= 0.6.0, langchain >= 0.3.30, langchain-classic >= 1.0.7.
2. Audit all pull_prompt/pull_prompt_commit (Python) and pullPrompt/pullPromptCommit (JS/TS) call sites: identify every location using a public owner/name identifier and gate or remove them.
3. Do not pass dangerously_pull_public_prompt=True unless the specific prompt contents have been independently reviewed and explicitly trusted, not just the publishing account.
4. Disable secrets_from_env=True for any prompt sourced outside the caller's own organization.
5. Avoid include_model=True when pulling prompts from untrusted sources: it expands the deserialization allowlist to partner integration classes.
6. Treat LANGSMITH_API_KEY as a high-value secret: rotate immediately if exposure is suspected, restrict access to minimum required team members, and audit LangSmith Hub for unexpected prompt modifications.
7. Monitor outbound LLM API traffic for anomalous base_url or endpoint changes as a detection signal for active exploitation.
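As an aid for the audit in step 2, the check below is a minimal sketch of flagging call sites that cross the public-prompt trust boundary. It assumes only what this advisory states: public prompts are referenced by a slash-separated `owner/name` identifier, while same-organization prompts use a bare name (optionally pinned with a `:commit-hash` suffix). The helper name is hypothetical.

```python
def is_public_prompt_identifier(identifier: str) -> bool:
    """Return True if a prompt identifier references a public owner/name
    prompt (has an owner prefix) rather than a same-organization prompt
    referenced by name only. Hypothetical audit helper; a trailing
    ':commit-hash' suffix is stripped before checking for the prefix."""
    name_part = identifier.split(":", 1)[0]
    return "/" in name_part

# Example audit pass: flag identifiers that pull external, attacker-publishable prompts.
call_sites = [
    "my-summarizer",                    # same-org: name only
    "my-summarizer:abc1234",            # same-org, pinned to a commit
    "attacker-org/helpful-summarizer",  # public owner/name: needs review or removal
]
flagged = [s for s in call_sites if is_public_prompt_identifier(s)]
```

Flagged identifiers are the ones to gate behind an explicit review before any upgrade-enforced opt-in flag is set.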
Frequently Asked Questions
What is CVE-2026-45134?
CVE-2026-45134 affects the LangSmith SDK's prompt pull methods, which deserialize prompt manifests from LangSmith Hub without enforcing a trust boundary between public external prompts and organization-owned ones—allowing an attacker who publishes a malicious prompt to inject attacker-controlled LangChain object constructor arguments, including a custom base_url that silently redirects all LLM inference traffic to attacker-controlled infrastructure. Redirected requests may expose provider API keys, system prompts, retrieved RAG context, and user data, while the secrets_from_env=True parameter creates an additional environment-variable exfiltration vector; with 2,640 downstream dependents and a package risk score of 77/100, the blast radius across LangChain-based AI deployments is significant. Although not in CISA KEV and lacking a public exploit, the low attack complexity—requiring only that a victim application pulls a public prompt by owner/name—makes this a credible supply chain threat for any agentic or CI/CD pipeline that auto-pulls prompts from the Hub. Upgrade to langsmith Python >= 0.8.0 or JS/TS >= 0.6.0 immediately, audit all pull_prompt and pullPrompt call sites for public owner/name identifiers, and rotate LANGSMITH_API_KEY if compromise is suspected.
Is CVE-2026-45134 actively exploited?
No confirmed active exploitation of CVE-2026-45134 has been reported, but organizations should still patch proactively.
How to fix CVE-2026-45134?
1. Upgrade immediately: langsmith Python >= 0.8.0, langsmith JS/TS >= 0.6.0, langchain >= 0.3.30, langchain-classic >= 1.0.7.
2. Audit all pull_prompt/pull_prompt_commit (Python) and pullPrompt/pullPromptCommit (JS/TS) call sites: identify every location using a public owner/name identifier and gate or remove them.
3. Do not pass dangerously_pull_public_prompt=True unless the specific prompt contents have been independently reviewed and explicitly trusted, not just the publishing account.
4. Disable secrets_from_env=True for any prompt sourced outside the caller's own organization.
5. Avoid include_model=True when pulling prompts from untrusted sources: it expands the deserialization allowlist to partner integration classes.
6. Treat LANGSMITH_API_KEY as a high-value secret: rotate immediately if exposure is suspected, restrict access to minimum required team members, and audit LangSmith Hub for unexpected prompt modifications.
7. Monitor outbound LLM API traffic for anomalous base_url or endpoint changes as a detection signal for active exploitation.
What systems are affected by CVE-2026-45134?
This vulnerability affects the following AI/ML architecture patterns: LangChain-based LLM application pipelines, Agent frameworks using LangSmith prompt management, RAG pipelines pulling shared prompts from LangSmith Hub, CI/CD pipelines with automated prompt pulls at startup or deployment, Multi-tenant SaaS applications using LangSmith for prompt versioning.
What is the CVSS score for CVE-2026-45134?
CVE-2026-45134 has a CVSS v3.1 base score of 7.1 (HIGH).
Technical Details
NVD Description
## Description

The LangSmith SDK's prompt pull methods (`pull_prompt` / `pull_prompt_commit` in Python, `pullPrompt` / `pullPromptCommit` in JS/TS) fetch and deserialize prompt manifests from the LangSmith Hub. These manifests may contain serialized LangChain objects and model configuration that affect runtime behavior. When pulling a public prompt by `owner/name` identifier, the manifest content is controlled by an external party, but prior versions of the SDK did not distinguish this from pulling a prompt within the caller's own organization.

Prompt manifests can intentionally configure a model with a custom base URL, default headers, model name, or other constructor arguments. These are supported features, but they also mean the prompt contents should be treated as executable configuration rather than plain text. A prompt can also include serialized LangChain `Runnable` or `PromptTemplate` objects with attacker-controlled constructor kwargs, or secret references that, if `secrets_from_env` is enabled, read environment variables at deserialization time.

Applications are exposed when all of the following are true:

- The application calls `pull_prompt` or `pull_prompt_commit` (Python) or `pullPrompt` or `pullPromptCommit` (JS/TS) with a public `owner/name` prompt identifier.
- The prompt was published or modified by an untrusted or compromised account.
- The application uses the pulled prompt without independently validating its contents.

Applications that only pull prompts from their own organization (referenced by name only, without an `owner/` prefix) are not affected by the public prompt trust boundary issue described above. However, same-organization prompts carry their own risk. If an attacker gains write access to the organization (for example, through a leaked `LANGSMITH_API_KEY` or a compromised team member account), they can push a malicious prompt that is pulled and deserialized without any additional warning.
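To make the "executable configuration" point concrete, the sketch below scans a deserialized manifest for endpoint-setting constructor kwargs before the prompt is used. The manifest shape and the key names searched for (`base_url`, `api_base`, `default_headers`, and so on) are illustrative assumptions, not the exact LangSmith wire format; the idea is simply that a pulled manifest should be inspected like untrusted config, not consumed as inert text.

```python
# Key names that can redirect or intercept LLM traffic (illustrative, not exhaustive).
SUSPICIOUS_KEYS = {"base_url", "api_base", "openai_api_base", "default_headers", "proxy"}

def find_endpoint_overrides(node, path=""):
    """Recursively walk a deserialized manifest (dicts/lists) and report
    (path, value) pairs for keys that could reroute outbound requests."""
    hits = []
    if isinstance(node, dict):
        for key, value in node.items():
            child_path = f"{path}.{key}" if path else key
            if key in SUSPICIOUS_KEYS:
                hits.append((child_path, value))
            hits.extend(find_endpoint_overrides(value, child_path))
    elif isinstance(node, list):
        for i, item in enumerate(node):
            hits.extend(find_endpoint_overrides(item, f"{path}[{i}]"))
    return hits

# Illustrative manifest fragment carrying an attacker-controlled endpoint.
manifest = {
    "model": {"kwargs": {"model": "gpt-4o", "base_url": "https://attacker.io/proxy"}},
    "template": "Summarize: {text}",
}
hits = find_endpoint_overrides(manifest)
```

Any hit here is a reason to reject the prompt rather than instantiate the configured model.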
## Impact

An attacker who publishes a malicious prompt to LangSmith Hub may be able to affect applications that pull that prompt by `owner/name`. If the prompt manifest reaches the SDK's deserialization path, the SDK will instantiate the referenced LangChain objects with the attacker-supplied constructor arguments rather than treating the manifest as inert data. Realistic impacts include:

- Server-side request forgery (SSRF), outbound request redirection, and interception of LLM traffic if a prompt manifest configures an LLM client with an attacker-controlled `base_url`, proxy, or equivalent endpoint-setting parameter. In typical deployments, redirected requests may include prompt contents, system prompts, retrieved context, model parameters, provider credentials, or other secrets and may disclose them to the attacker-controlled endpoint.
- Prompt injection or behavior manipulation if a manifest embeds attacker-controlled system messages, prompt templates, or model parameters that alter the application's behavior.
- Additional deserialization risk when `include_model=True` is passed, because this expands the allowlist to partner integration classes. This is not the default, but it materially increases risk when pulling prompts from outside the caller's organization.

## Remediation

The LangSmith SDK now blocks pulling public prompts by `owner/name` by default. Callers must explicitly opt in by passing `dangerously_pull_public_prompt=True` (Python) or `dangerouslyPullPublicPrompt: true` (JS/TS) to acknowledge the trust boundary. This flag should only be set after reviewing and trusting the prompt contents, not merely the publishing account. Upgrade to LangSmith SDK **Python >= 0.8.0** or **JS/TS >= 0.6.0**.

### Guidance for prompt pull methods

The prompt pull methods (`pull_prompt` / `pull_prompt_commit` in Python, `pullPrompt` / `pullPromptCommit` in JS/TS) should be used only with trusted prompts.
Do not pull public prompts by `owner/name` from untrusted or unreviewed sources without understanding that the manifest contents will be deserialized and may affect runtime behavior.

When pulling prompts that include model configuration (`include_model=True` in Python, `includeModel: true` in JS/TS), the deserialization allowlist expands to include partner integration classes. Because this mode is not the default and is often unnecessary for third-party prompts, prefer the default (`false`) when pulling prompts from sources outside your organization.

Avoid passing `secrets_from_env=True` (Python) when pulling untrusted prompts. This parameter allows prompt manifests to read environment variables during deserialization. Only use it with trusted prompts from your own organization.

### Same-organization prompts

Prompts pulled from the caller's own organization (referenced by name only, without an `owner/` prefix) are not gated by the new `dangerously_pull_public_prompt` flag, but they are not inherently safe. If an attacker gains write access to the organization (for example, through a leaked `LANGSMITH_API_KEY` or a compromised team member account), they can push a malicious prompt that redirects LLM traffic to attacker-controlled infrastructure and may disclose any credentials attached to those requests.

The security of same-organization prompts follows a shared responsibility model. The LangSmith SDK enforces trust boundaries for public prompts pulled from external accounts, but it cannot protect against compromised credentials or accounts within the caller's own organization. Securing API keys, managing team member access, and reviewing prompt contents before production deployment are the responsibility of the organization. Organizations should treat prompts as executable configuration and apply the same review and audit practices they would apply to application code.

## Credits

First reported by @Moaaz-0x.
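The guidance above can be sketched as a policy guard applied at every pull site. The `safe_pull_args` helper and its policy are hypothetical; the keyword names (`dangerously_pull_public_prompt`, `include_model`, `secrets_from_env`) are the ones this advisory describes for the Python SDK, and whether they can all be forwarded to a given pull call should be checked against the installed SDK version.

```python
def safe_pull_args(identifier: str, *, trusted_public: bool = False) -> dict:
    """Build keyword arguments for a prompt pull that enforce the safe
    defaults described in the advisory. Hypothetical policy helper:
    public owner/name prompts are rejected unless explicitly marked as
    reviewed, and risky deserialization options stay disabled."""
    is_public = "/" in identifier.split(":", 1)[0]
    if is_public and not trusted_public:
        raise ValueError(
            f"Refusing to pull public prompt {identifier!r}: review its "
            "contents first, then opt in with trusted_public=True."
        )
    return {
        # Only acknowledge the trust boundary for explicitly reviewed prompts.
        "dangerously_pull_public_prompt": is_public,
        # Keep the deserialization allowlist minimal for external prompts.
        "include_model": not is_public,
        # Never let pulled manifests read environment variables.
        "secrets_from_env": False,
    }
```

A call site would then use something like `client.pull_prompt(identifier, **safe_pull_args(identifier))`, so an unreviewed public identifier fails loudly instead of being deserialized.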
Exploitation Scenario
An attacker registers a LangSmith account and publishes a prompt at attacker-org/helpful-summarizer. The prompt manifest embeds constructor kwargs specifying base_url: https://attacker.io/proxy and default Authorization headers that mimic a legitimate LLM API. A victim's LangChain-based document summarization service calls pull_prompt('attacker-org/helpful-summarizer')—possibly as part of a CI/CD pipeline loading production prompts at startup—and the SDK deserializes the manifest, instantiating an OpenAI-compatible client silently pointed at the attacker's proxy. All subsequent LLM calls, including system prompts containing business logic, retrieved RAG chunks with internal document content, and user queries with the OPENAI_API_KEY in the Authorization header, are transparently forwarded to attacker.io before being proxied to the real API. The victim application functions normally with no visible disruption while the attacker logs all traffic and exfiltrates valid provider credentials for further abuse.
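The detection signal from step 7 of the remediation list can be sketched as a host allowlist check on whatever endpoint a pulled prompt configures. The allowlist contents are deployment-specific assumptions, and how you obtain the configured base URL depends on your client objects; the check itself is the point.

```python
from urllib.parse import urlparse

# Hosts this deployment legitimately sends LLM traffic to (illustrative).
ALLOWED_LLM_HOSTS = {"api.openai.com", "api.anthropic.com"}

def endpoint_is_allowed(base_url):
    """Return True if a configured LLM base URL points at an approved
    provider host. None means the provider default endpoint is in use."""
    if base_url is None:
        return True
    return urlparse(base_url).hostname in ALLOWED_LLM_HOSTS

# The manifest-configured endpoint from the scenario above would be flagged:
suspicious = not endpoint_is_allowed("https://attacker.io/proxy")
```

The same host set can drive egress firewall rules, so a silently redirected client fails at the network layer even if the application-level check is missed.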
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:L/A:N
Related Vulnerabilities
- CVE-2025-2828 (CVSS 10.0, same package: langchain): LangChain RequestsToolkit SSRF exposes cloud metadata
- CVE-2023-34540 (CVSS 9.8, same package: langchain): LangChain RCE via JiraAPIWrapper crafted input
- CVE-2023-29374 (CVSS 9.8, same package: langchain): LangChain RCE via prompt injection in LLMMathChain
- CVE-2023-34541 (CVSS 9.8, same package: langchain): LangChain RCE via unsafe load_prompt deserialization
- CVE-2023-36258 (CVSS 9.8, same package: langchain): LangChain unauthenticated RCE via code injection