CVE-2025-68664

GHSA-c67j-w6g6-q2cm HIGH
Published December 23, 2025
CISO Take

If your LangChain applications pass user-controlled data into LangChain's native dumps()/dumpd() serialization functions, patch to langchain-core 0.3.81+ or 1.2.5+ immediately. An unauthenticated remote attacker can inject crafted 'lc' key structures that are deserialized as legitimate LangChain objects, bypassing the untrusted-data boundary and enabling confidentiality breaches or integrity manipulation. LangChain's ubiquity across agentic and RAG architectures makes the blast radius organization-wide.

Affected Systems

Package Ecosystem Vulnerable Range Patched
langchain-core pip >= 1.0.0, < 1.2.5 1.2.5
langchain-core pip < 0.3.81 0.3.81
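As a quick local check against the ranges above, the installed version can be compared to the patched releases. This is a minimal stdlib sketch assuming plain X.Y.Z version strings; it does not implement full PEP 440 parsing, so pre-release suffixes would need a real version parser:

```python
from importlib.metadata import version, PackageNotFoundError

# Compare an installed langchain-core version against the affected ranges
# above (simple tuple comparison; fine for plain X.Y.Z versions only).
def is_vulnerable(v: str) -> bool:
    parts = tuple(int(p) for p in v.split(".")[:3])
    if parts >= (1, 0, 0):
        return parts < (1, 2, 5)   # 1.x branch: patched in 1.2.5
    return parts < (0, 3, 81)      # 0.x branch: patched in 0.3.81

try:
    installed = version("langchain-core")
    print(installed, "VULNERABLE" if is_vulnerable(installed) else "patched")
except PackageNotFoundError:
    print("langchain-core is not installed")
```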

Severity & Risk

CVSS 3.1
8.2 / 10
EPSS
0.0%
chance of exploitation in 30 days
KEV Status
Not in KEV
Sophistication
Moderate

Recommended Action

  1. PATCH NOW: Upgrade langchain-core to >= 0.3.81 (0.x branch) or >= 1.2.5 (1.x branch). Verify with 'pip show langchain-core'.
  2. WORKAROUND (if patching is blocked): Reject or sanitize any user-controlled input containing top-level 'lc' keys before it reaches dumps()/dumpd(). Treat 'lc' as a reserved key in all input validation schemas.
  3. CODE AUDIT: Grep the codebase for 'dumps(' and 'dumpd(' calls and trace data provenance; flag any path where external or user data reaches these functions without sanitization.
  4. DETECTION: Monitor for unexpected deserialization errors, unusual LangChain class instantiation in application logs, and anomalous data access patterns post-deserialization.
  5. CONTAINER/CI: Rebuild any Docker images pinned to vulnerable langchain-core versions and update dependency lock files.
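The workaround in step 2 can be sketched as a recursive scan that rejects any user-controlled structure carrying the reserved 'lc' key before it reaches dumps()/dumpd(). The function names and rejection policy here are illustrative, not part of LangChain's API:

```python
# Illustrative workaround sketch: treat 'lc' as a reserved key and reject
# user payloads that carry it anywhere in their structure.

def contains_reserved_lc_key(obj) -> bool:
    """Recursively check for dicts carrying LangChain's reserved 'lc' key."""
    if isinstance(obj, dict):
        if "lc" in obj:
            return True
        return any(contains_reserved_lc_key(v) for v in obj.values())
    if isinstance(obj, (list, tuple)):
        return any(contains_reserved_lc_key(v) for v in obj)
    return False

def validate_user_payload(payload):
    """Gate user input before it reaches the serialization layer."""
    if contains_reserved_lc_key(payload):
        raise ValueError("reserved 'lc' key present in user-controlled input")
    return payload
```

For example, validate_user_payload({'lc': 1, 'type': 'constructor'}) raises, while ordinary session data passes through unchanged.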

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.1.2 - AI supply chain management
A.6.1.5 - AI risk assessment
A.8.4 - Information security controls for AI systems
A.9.2 - Information security risk treatment
NIST AI RMF
GOVERN 1.1 - Policies and processes for AI risk management
MANAGE 2.2 - Mechanisms are in place and applied to respond to, recover from, and communicate about AI risks
OWASP LLM Top 10
LLM05:2025 - Improper Output Handling (LLM02, "Insecure Output Handling", in the 2023 edition)
LLM03:2025 - Supply Chain (LLM05, "Supply Chain Vulnerabilities", in the 2023 edition)

Technical Details

NVD Description

LangChain is a framework for building agents and LLM-powered applications. Prior to versions 0.3.81 and 1.2.5, a serialization injection vulnerability exists in LangChain's dumps() and dumpd() functions. The functions do not escape dictionaries with 'lc' keys when serializing free-form dictionaries. The 'lc' key is used internally by LangChain to mark serialized objects. When user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than plain user data. This issue has been patched in versions 0.3.81 and 1.2.5.
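The root cause can be illustrated without LangChain itself: a free-form user dictionary that copies the 'lc' envelope shape is structurally identical to a genuinely serialized object after a JSON round trip, so a deserializer keying only on the envelope cannot tell data from object. A stdlib-only sketch, where the envelope fields follow the description above and the class path is only an example:

```python
import json

# The envelope shape LangChain uses to mark serialized objects (per the
# description above); the class path is only an example.
serialized_object = {
    "lc": 1,
    "type": "constructor",
    "id": ["langchain_core", "runnables", "RunnableLambda"],
    "kwargs": {},
}

# The same structure arriving as untrusted user JSON survives a round trip
# unchanged, so nothing marks it as "just data" rather than an object.
user_supplied = json.loads(json.dumps(serialized_object))

# A deserializer that keys only on the top-level 'lc' marker would accept it.
looks_like_lc_object = isinstance(user_supplied, dict) and "lc" in user_supplied
print(looks_like_lc_object)  # True
```

The patched versions close this gap by escaping 'lc' keys in free-form dictionaries during serialization, so user data can no longer collide with the internal marker.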

Exploitation Scenario

An adversary targeting a LangChain-backed API that persists user session state sends a crafted JSON body containing a dictionary whose 'lc' key structure mimics LangChain's internal serialization schema, e.g. {'lc': 1, 'type': 'constructor', 'id': ['langchain_core', 'runnables', 'RunnableLambda'], 'kwargs': {<malicious_payload>}}. The application serializes this via dumps() and later deserializes it. LangChain's deserializer treats the crafted structure as a legitimate LangChain object and instantiates it, executing attacker-controlled logic in the application process context. No credentials, no prior access, and no user interaction are required: any network-reachable input path touching the serialization layer is attack surface.

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:L/A:N
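The 8.2 base score follows from this vector under the CVSS v3.1 formulas; a sketch of the computation, with metric weights taken from the FIRST CVSS v3.1 specification:

```python
import math

# CVSS v3.1 base score for AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:L/A:N.

def roundup(x: float) -> float:
    """CVSS v3.1 Roundup: smallest one-decimal number >= input."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10

C, I, A = 0.56, 0.22, 0.0                 # C:H, I:L, A:N
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.85   # AV:N, AC:L, PR:N (scope unchanged), UI:N

iss = 1 - (1 - C) * (1 - I) * (1 - A)     # Impact Sub-Score
impact = 6.42 * iss                       # scope unchanged
exploitability = 8.22 * AV * AC * PR * UI
base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 8.2
```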

Timeline

Published
December 23, 2025
Last Modified
January 13, 2026
First Seen
December 23, 2025