CVE-2024-58340: langchain: ReDoS in MRKLOutputParser enables denial of service

HIGH · PoC AVAILABLE · CISA: TRACK*
Published January 12, 2026
CISO Take

Any LangChain-based application running MRKL agents on version 0.3.1 or earlier is vulnerable to a DoS attack delivered via prompt injection — no authentication required. An attacker who can influence LLM output (e.g., through user-supplied prompts in a downstream app) can stall your agent service with a single crafted string. Patch to LangChain >0.3.1 immediately; if you cannot patch today, wrap MRKLOutputParser calls with a timeout and sanitize LLM output before parsing.

Risk Assessment

CVSS 7.5 High with AV:N/AC:L/PR:N/UI:N is accurate for the worst case. Real-world exploitability requires the attacker to first achieve prompt injection against the target application — an increasingly realistic assumption for any public-facing LLM app. The exploit chain is two steps: (1) inject a crafted payload via user input, (2) the LLM reflects it and the app feeds it to the vulnerable parser. CPU exhaustion is the outcome, not code execution or data leakage, which limits blast radius. Organizations running MRKL/ReAct agent architectures at scale or in multi-tenant SaaS products face the highest exposure.

Affected Systems

Package: langchain
Ecosystem: pip
Vulnerable Range: ≤ 0.3.1
Patched: No patch

Do you use langchain ≤ 0.3.1 with MRKL agents? You're affected.

Severity & Risk

CVSS 3.1: 7.5 / 10
EPSS: 0.1% chance of exploitation in 30 days (higher than 24% of all CVEs)
Exploitation Status: Exploit Available
Exploitation: MEDIUM
Sophistication: Moderate
Exploitation Confidence: Medium
CISA SSVC: Public PoC
Public PoC indexed (trickest/cve)
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

AV (Attack Vector): Network
AC (Attack Complexity): Low
PR (Privileges Required): None
UI (User Interaction): None
S (Scope): Unchanged
C (Confidentiality): None
I (Integrity): None
A (Availability): High

Recommended Action

6 steps
  1. PATCH

    Upgrade langchain to the first version past 0.3.1 that includes the fixed regex; verify with pip show langchain.

  2. WORKAROUND (if patch is not immediate)

    Wrap MRKLOutputParser.parse() calls with a signal-based or thread-based timeout (e.g., 2–5 seconds); raise a parsing error and abort on timeout.

  3. INPUT HYGIENE

    Truncate LLM output to a reasonable maximum length (e.g., 4 KB) before passing to the parser; reject outputs with suspicious repetitive patterns.

  4. RATE LIMITING

    Apply per-user/session rate limits on agent invocations to reduce DoS throughput.

  5. DETECTION

    Alert on sustained high CPU usage in agent worker processes; log parsing duration and alert on outliers >500 ms.

  6. INVENTORY

    Audit all internal and customer-facing apps that import langchain.agents.mrkl.
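Steps 2 and 3 can be sketched together as one defensive wrapper. This is a minimal illustration, assuming a parser object that exposes a `parse(text)` method (as `MRKLOutputParser` does in affected versions); the timeout value, length cap, and the `safe_parse`/`ParseTimeoutError` names are hypothetical, not LangChain API.

```python
import concurrent.futures

MAX_OUTPUT_CHARS = 4096  # step 3: illustrative length cap
PARSE_TIMEOUT_S = 3.0    # step 2: illustrative per-call time budget


class ParseTimeoutError(RuntimeError):
    """Raised when parsing exceeds its time budget (hypothetical helper)."""


def safe_parse(parser, llm_output: str, timeout: float = PARSE_TIMEOUT_S):
    """Truncate LLM output, then run parser.parse() under a thread-based timeout.

    Caveat: Python threads cannot be killed, so a timed-out worker keeps
    burning CPU until the regex finishes; pair this with process recycling
    or per-worker CPU limits in production.
    """
    # Step 3: INPUT HYGIENE -- cap what reaches the backtracking-prone regex.
    text = llm_output[:MAX_OUTPUT_CHARS]
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        future = pool.submit(parser.parse, text)
        try:
            # Step 2: WORKAROUND -- bound the time the parser may spend.
            return future.result(timeout=timeout)
        except concurrent.futures.TimeoutError:
            raise ParseTimeoutError("MRKL output parsing exceeded time budget")
    finally:
        pool.shutdown(wait=False)  # do not block on the abandoned worker
```

Raising on timeout (rather than returning a default action) keeps the failure visible to the detection step: the resulting error rate is itself an alertable signal.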

CISA SSVC Assessment

Decision: Track*
Exploitation: PoC
Automatable: Yes
Technical Impact: Partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 15 - Accuracy, Robustness, and Cybersecurity
ISO 42001
A.6.1.4 - AI Risk Assessment
A.6.2.6 - AI system availability and resilience
A.9.2 - AI Incident Handling
A.9.3 - AI system risk treatment
NIST AI RMF
MANAGE-2.2 - Mechanisms to sustain the deployed AI system
MAP-5.1 - Likelihood and Impact of AI Risks
OWASP LLM Top 10
LLM01:2025 - Prompt Injection
LLM07:2025 - System Prompt Leakage / Insecure Output Handling
LLM10:2025 - Unbounded Consumption

Frequently Asked Questions

What is CVE-2024-58340?

Any LangChain-based application running MRKL agents on version 0.3.1 or earlier is vulnerable to a DoS attack delivered via prompt injection — no authentication required. An attacker who can influence LLM output (e.g., through user-supplied prompts in a downstream app) can stall your agent service with a single crafted string. Patch to LangChain >0.3.1 immediately; if you cannot patch today, wrap MRKLOutputParser calls with a timeout and sanitize LLM output before parsing.

Is CVE-2024-58340 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2024-58340, increasing the risk of exploitation.

How to fix CVE-2024-58340?

1. PATCH: Upgrade langchain to the first version past 0.3.1 that includes the fixed regex; verify with `pip show langchain`.
2. WORKAROUND (if patch is not immediate): Wrap MRKLOutputParser.parse() calls with a signal-based or thread-based timeout (e.g., 2–5 seconds); raise a parsing error and abort on timeout.
3. INPUT HYGIENE: Truncate LLM output to a reasonable maximum length (e.g., 4 KB) before passing to the parser; reject outputs with suspicious repetitive patterns.
4. RATE LIMITING: Apply per-user/session rate limits on agent invocations to reduce DoS throughput.
5. DETECTION: Alert on sustained high CPU usage in agent worker processes; log parsing duration and alert on outliers >500 ms.
6. INVENTORY: Audit all internal and customer-facing apps that import langchain.agents.mrkl.

What systems are affected by CVE-2024-58340?

This vulnerability affects the following AI/ML architecture patterns: agent frameworks, LLM agentic pipelines, ReAct/MRKL agent workflows, multi-tenant LLM SaaS.

What is the CVSS score for CVE-2024-58340?

CVE-2024-58340 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.08%.

Technical Details

NVD Description

LangChain versions up to and including 0.3.1 contain a regular expression denial-of-service (ReDoS) vulnerability in the MRKLOutputParser.parse() method (libs/langchain/langchain/agents/mrkl/output_parser.py). The parser applies a backtracking-prone regular expression when extracting tool actions from model output. An attacker who can supply or influence the parsed text (for example via prompt injection in downstream applications that pass LLM output directly into MRKLOutputParser.parse()) can trigger excessive CPU consumption by providing a crafted payload, causing significant parsing delays and a denial-of-service condition.

Exploitation Scenario

An attacker targets a public-facing AI assistant built on LangChain MRKL agents. They craft a user prompt designed to cause the underlying LLM to produce output containing a pathological string — for example, a long sequence of spaces or repeated characters that exploits the backtracking in the MRKL action-extraction regex (e.g., `Action: ` followed by thousands of repeated ambiguous characters). The application passes the LLM's raw output directly to MRKLOutputParser.parse() without sanitization. The regex engine enters catastrophic backtracking, pegging one CPU core at 100% for tens of seconds per request. An attacker automating dozens of such requests can exhaust worker threads and render the service unavailable for all users within minutes.
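The catastrophic backtracking described above can be reproduced with a deliberately pathological pattern. The regex below is an illustrative stand-in with the same nested-quantifier shape, NOT the actual pattern from output_parser.py; `parse_time` is a hypothetical helper for timing one failing match.

```python
import re
import time

# Illustrative backtracking-prone pattern: once the trailing literal 'X'
# fails to match, the (\s+)+ group forces the engine to try every way of
# splitting the run of spaces between its iterations (~2^n attempts).
EVIL_PATTERN = re.compile(r"Action:(\s+)+X")


def parse_time(n_spaces: int) -> float:
    """Time one failing match against 'Action:' + n spaces + a bad suffix."""
    payload = "Action:" + " " * n_spaces + "?"  # '?' guarantees failure, forcing full backtracking
    start = time.perf_counter()
    assert EVIL_PATTERN.search(payload) is None
    return time.perf_counter() - start
```

Each extra space roughly doubles the failing-match time, which is why a single short crafted string in the LLM's output is enough to peg a worker's CPU core for seconds.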

Weaknesses (CWE)

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

Timeline

Published
January 12, 2026
Last Modified
January 21, 2026
First Seen
January 12, 2026

Related Vulnerabilities