CVE-2026-33873: Langflow: server-side RCE via LLM-generated code exec

GHSA-v8hw-mh8c-jxfc · Severity: UNKNOWN
Published March 27, 2026
CISO Take

Any user who can access the Agentic Assistant in Langflow <= 1.8.1 can achieve full server-side code execution by influencing what Python the LLM generates — no exploit tooling required, just a crafted prompt. Patch to 1.9.0 immediately or disable the Agentic Assistant feature. Treat any Langflow instance accessible to untrusted users as fully compromised until patched.

What is the risk?

HIGH risk despite absent CVSS. The impact ceiling is full server compromise (RCE), and the attack barrier is low: access to the Agentic Assistant plus a crafted prompt is sufficient. Langflow instances routinely hold high-value credentials (LLM API keys, database connections, cloud IAM tokens) in environment variables, making post-exploitation impact severe. Exposure is significant — Langflow is commonly deployed as an internal developer tool with broad internal access, and some instances are internet-facing.

What systems are affected?

Package: langflow (pip)
Vulnerable range: <= 1.8.1
Patched version: 1.9.0

Do you use langflow? You're affected.
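To triage quickly, compare the installed version against the patched release. A minimal sketch, assuming plain dotted version strings; the `parse` helper is illustrative, not a Langflow API:

```python
def parse(version: str) -> tuple:
    """Turn a dotted version string into a comparable tuple (illustrative only)."""
    return tuple(int(part) for part in version.split(".")[:3])

# First fixed release per the advisory
PATCHED = parse("1.9.0")

def is_vulnerable(version: str) -> bool:
    """True if the given langflow version falls in the vulnerable range (<= 1.8.1)."""
    return parse(version) < PATCHED
```

For a live check, `importlib.metadata.version("langflow")` reports the installed version; for releases with pre-release suffixes, prefer a real version parser such as `packaging.version` over this sketch.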

Severity & Risk

CVSS 3.1: N/A
EPSS: 0.0% chance of exploitation in 30 days (higher than 15% of all CVEs)
Exploitation status: no known exploitation
Sophistication: trivial

What should I do?

6 steps
  1. PATCH

    Upgrade langflow to 1.9.0 via pip immediately.

  2. DISABLE

    If patching is not immediately feasible, restrict or disable access to the Agentic Assistant API endpoint (/api/v1/agentic or equivalent).

  3. SANDBOX

    Run Langflow in a container as a non-root user, with a read-only filesystem where possible, network egress restrictions, and secrets injected via environment variables rather than hardcoded.

  4. RESTRICT ACCESS

    Enforce authentication and authorization on all Langflow endpoints; do not expose to the internet without strict ACLs.

  5. DETECT

    Alert on unexpected subprocess spawning, outbound network connections, and file writes from the Langflow process. Review logs for Agentic Assistant usage prior to patching — any such session should be treated as potentially malicious.

  6. ROTATE

    If exploitation cannot be ruled out, rotate all credentials accessible to the Langflow process (LLM API keys, DB passwords, cloud tokens).
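The ROTATE step is easier with an inventory of what a payload could actually have dumped: LLM-generated code running in the Langflow process can read all of `os.environ`. A minimal sketch for building that rotation checklist; the marker substrings are an assumption about your secret-naming conventions:

```python
import os

# Assumed substrings that mark secret-bearing variables; adjust to your naming
SECRET_MARKERS = ("KEY", "TOKEN", "SECRET", "PASSWORD", "CREDENTIAL")

def exposed_credentials(environ=os.environ) -> list:
    """Names (not values) of environment variables a payload could have dumped."""
    return sorted(
        name for name in environ
        if any(marker in name.upper() for marker in SECRET_MARKERS)
    )
```

Run this in the Langflow process context (or against a captured copy of its environment) to enumerate which credentials to rotate if exploitation cannot be ruled out.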

CISA SSVC Assessment

Decision: Track
Exploitation: none
Automatable: no
Technical Impact: total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Compliance Impact

This CVE is relevant to:

EU AI Act
Art.15 - Accuracy, Robustness and Cybersecurity
ISO 42001
8.4 - AI System Operation and Monitoring
NIST AI RMF
MANAGE-2.2 - Mechanisms to sustain the value of deployed AI
OWASP LLM Top 10
LLM01:2025 - Prompt Injection
LLM05:2025 - Improper Output Handling
LLM06:2025 - Excessive Agency

Frequently Asked Questions

What is CVE-2026-33873?

CVE-2026-33873 is a server-side remote code execution vulnerability in Langflow <= 1.8.1. Any user who can reach the Agentic Assistant can steer the LLM into generating Python that the server then executes during its validation phase, so a crafted prompt alone yields code execution — no exploit tooling required. Upgrade to 1.9.0 or disable the Agentic Assistant, and treat any instance accessible to untrusted users as compromised until patched.

Is CVE-2026-33873 actively exploited?

No confirmed active exploitation of CVE-2026-33873 has been reported, but organizations should still patch proactively.

How to fix CVE-2026-33873?

1. Patch: upgrade langflow to 1.9.0 via pip immediately.
2. Disable: if patching is not immediately feasible, restrict or disable access to the Agentic Assistant API endpoint (/api/v1/agentic or equivalent).
3. Sandbox: run Langflow in a container as a non-root user, with a read-only filesystem where possible, network egress restrictions, and secrets injected via environment variables rather than hardcoded.
4. Restrict access: enforce authentication and authorization on all Langflow endpoints; do not expose to the internet without strict ACLs.
5. Detect: alert on unexpected subprocess spawning, outbound network connections, and file writes from the Langflow process, and treat any pre-patch Agentic Assistant session as potentially malicious.
6. Rotate: if exploitation cannot be ruled out, rotate all credentials accessible to the Langflow process (LLM API keys, DB passwords, cloud tokens).

What systems are affected by CVE-2026-33873?

This vulnerability affects the following AI/ML architecture patterns: agent frameworks, LLM application development platforms, AI workflow automation, MLOps pipelines.

What is the CVSS score for CVE-2026-33873?

No CVSS score has been assigned yet.

Technical Details

NVD Description

Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assistant feature in Langflow executes LLM-generated Python code during its validation phase. Although this phase appears intended to validate generated component code, the implementation reaches dynamic execution sinks and instantiates the generated class server-side. In deployments where an attacker can access the Agentic Assistant feature and influence the model output, this can result in arbitrary server-side Python execution. Version 1.9.0 fixes the issue.
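The core flaw is a "validation" step that reaches an execution sink. The sketch below is not Langflow's actual code; it illustrates the generic pattern: an exec-based check runs attacker-influenced code as a side effect, while an AST-based syntax check never executes anything:

```python
import ast
import os

def unsafe_validate(code: str) -> bool:
    """Anti-pattern: 'validating' by executing. Any side effects in `code` run here."""
    try:
        exec(code, {})  # execution sink: attacker-influenced code runs server-side
        return True
    except Exception:
        return False

def safe_validate(code: str) -> bool:
    """Syntax-only check: parses the code without ever executing it."""
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

# Benign stand-in for an LLM-generated payload with a visible side effect
payload = "import os\nos.environ['MARKER'] = 'executed'"
```

`safe_validate(payload)` accepts the code without setting `MARKER`; `unsafe_validate(payload)` sets it, which is exactly the sink class this CVE describes. Parsing alone does not make later execution safe, but it removes execution from the validation path.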

Exploitation Scenario

An attacker with access to a Langflow instance — an internal developer, a compromised account, or an unauthenticated user on an internet-exposed deployment — opens the Agentic Assistant and submits a prompt designed to steer the LLM toward generating a 'component' that embeds malicious Python: e.g., a reverse shell, credential dump from environment variables, or file exfiltration. Because Langflow's validation phase executes the generated code server-side and instantiates the generated class, the payload runs with the privileges of the Langflow process. The attacker exfiltrates all environment variables (LLM API keys, DB credentials, cloud tokens), establishes persistence, and pivots to connected infrastructure. No special tooling or AI/ML expertise is required — the LLM does the heavy lifting.
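For the DETECT step, host-level tooling (EDR, eBPF sensors) is the right layer, but CPython also offers an in-process tripwire: `sys.addaudithook` (Python 3.8+) receives runtime events such as `exec` and `subprocess.Popen`. A minimal illustrative sketch — not a substitute for real monitoring, and note that audit hooks cannot be removed once installed:

```python
import sys

suspicious_events = []

def tripwire(event: str, args: tuple) -> None:
    # Record events that indicate dynamic code execution or process spawning
    if event in ("exec", "subprocess.Popen", "os.system"):
        suspicious_events.append(event)

sys.addaudithook(tripwire)

# Simulate the vulnerable pattern: dynamically executing generated code
exec(compile("x = 1", "<generated>", "exec"))
```

In a real deployment the hook would forward events to a logging pipeline rather than a list; the point is that exec-sink activity inside the process is observable without external tooling.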

Timeline

Published
March 27, 2026
Last Modified
March 27, 2026
First Seen
March 27, 2026
