CVE-2026-33873

GHSA-v8hw-mh8c-jxfc (severity: unknown)

Langflow: server-side RCE via LLM-generated code exec

Published March 27, 2026
CISO Take

Any user who can access the Agentic Assistant in Langflow <= 1.8.1 can achieve full server-side code execution by influencing what Python the LLM generates — no exploit tooling required, just a crafted prompt. Patch to 1.9.0 immediately or disable the Agentic Assistant feature. Treat any Langflow instance accessible to untrusted users as fully compromised until patched.

Affected Systems

Package    Ecosystem   Vulnerable Range   Patched
langflow   pip         <= 1.8.1           1.9.0

If you run langflow at version 1.8.1 or earlier, you are affected.
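As a quick triage aid, a minimal range check can be written as follows (a sketch assuming plain "major.minor.patch" version strings; pre-release tags like "1.9.0rc1" would need a real version parser):

```python
def is_vulnerable(version: str) -> bool:
    """Return True if a langflow version is in the advisory's range (<= 1.8.1)."""
    # Assumes a plain numeric "major.minor.patch" string; fixed versions are >= 1.9.0.
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts < (1, 9, 0)

print(is_vulnerable("1.8.1"))  # True: in the vulnerable range
print(is_vulnerable("1.9.0"))  # False: patched
```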

Severity & Risk

CVSS 3.1: N/A
EPSS: N/A
KEV Status: Not in KEV
Sophistication: Trivial

Recommended Action

  1. PATCH: Upgrade langflow to 1.9.0 via pip immediately.
  2. DISABLE: If patching is not immediately feasible, restrict or disable access to the Agentic Assistant API endpoint (/api/v1/agentic or equivalent).
  3. SANDBOX: Run Langflow in a container with a non-root user, a read-only filesystem where possible, network egress restrictions, and secrets injected via environment variables rather than hardcoded.
  4. RESTRICT ACCESS: Enforce authentication and authorization on all Langflow endpoints; do not expose them to the internet without strict ACLs.
  5. DETECT: Alert on unexpected subprocess spawning, outbound network connections, and file writes from the Langflow process. Review logs for Agentic Assistant usage prior to patching; treat any such session as potentially malicious.
  6. ROTATE: If exploitation cannot be ruled out, rotate all credentials accessible to the Langflow process (LLM API keys, DB passwords, cloud tokens).
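For the DETECT step, CPython's runtime audit hooks offer one lightweight in-process primitive. A sketch using the stdlib sys.addaudithook (the event names are CPython's; the alert list is a hypothetical stand-in for a real alerting pipeline):

```python
import sys

alerts = []  # hypothetical stand-in for a real alerting pipeline

def audit_hook(event: str, args: tuple) -> None:
    # CPython raises these audit events for dynamic code execution and
    # process spawning -- the primitives an exploited worker would hit.
    if event in ("exec", "os.system", "subprocess.Popen"):
        alerts.append(event)

sys.addaudithook(audit_hook)  # note: audit hooks cannot be removed once added

# Simulate LLM-generated code reaching an exec() sink:
exec("x = 1 + 1")
print(alerts)  # the "exec" audit event fired
```

An out-of-process equivalent (eBPF, auditd, EDR rules on the Langflow process) avoids the in-process hook being tampered with by the payload itself.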

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act: Art. 15 - Accuracy, Robustness and Cybersecurity
ISO 42001: 8.4 - AI System Operation and Monitoring
NIST AI RMF: MANAGE-2.2 - Mechanisms to sustain the value of deployed AI
OWASP LLM Top 10:
  LLM01:2025 - Prompt Injection
  LLM05:2025 - Improper Output Handling
  LLM06:2025 - Excessive Agency

Technical Details

NVD Description

Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assistant feature in Langflow executes LLM-generated Python code during its validation phase. Although this phase appears intended to validate generated component code, the implementation reaches dynamic execution sinks and instantiates the generated class server-side. In deployments where an attacker can access the Agentic Assistant feature and influence the model output, this can result in arbitrary server-side Python execution. Version 1.9.0 fixes the issue.
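The core problem generalizes beyond Langflow: passing a class definition through exec() runs its module-level statements immediately, and instantiating the resulting class runs its __init__. A minimal illustration of the pattern (hypothetical code, not Langflow's actual implementation; the payload here is benign, but the attacker controls the string in the vulnerable scenario):

```python
import os

# Hypothetical "generated component" source, as an LLM might emit it.
generated = """
marker = os.getpid()  # module-level code runs the moment exec() is called

class Component:
    def __init__(self):
        self.pid = marker  # __init__ runs at instantiation, server-side
"""

namespace = {"os": os}
exec(generated, namespace)           # side effects happen here...
instance = namespace["Component"]()  # ...and again here
print(instance.pid == os.getpid())   # True
```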

Exploitation Scenario

An attacker with access to a Langflow instance — an internal developer, a compromised account, or an unauthenticated user on an internet-exposed deployment — opens the Agentic Assistant and submits a prompt designed to steer the LLM toward generating a 'component' that embeds malicious Python: e.g., a reverse shell, credential dump from environment variables, or file exfiltration. Because Langflow's validation phase executes the generated code server-side and instantiates the generated class, the payload runs with the privileges of the Langflow process. The attacker exfiltrates all environment variables (LLM API keys, DB credentials, cloud tokens), establishes persistence, and pivots to connected infrastructure. No special tooling or AI/ML expertise is required — the LLM does the heavy lifting.
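One mitigation pattern for this class of bug (not necessarily what the 1.9.0 fix does, which is unspecified here) is to validate generated code statically with the stdlib ast module, which parses source without ever executing it:

```python
import ast

def statically_valid_component(src: str) -> bool:
    """Accept source only if it parses and defines at least one class.

    Unlike exec(), ast.parse never runs the code, so malicious
    module-level statements have no effect during validation.
    """
    try:
        tree = ast.parse(src)
    except SyntaxError:
        return False
    return any(isinstance(node, ast.ClassDef) for node in tree.body)

print(statically_valid_component("class C:\n    pass"))          # True
print(statically_valid_component("import os; os.system('id')"))  # False, and nothing executed
```

Static checks can be tightened further (e.g., rejecting dangerous imports via an ast.NodeVisitor), but the key property is that validation never reaches an execution sink.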

Timeline

Published
March 27, 2026
Last Modified
March 27, 2026
First Seen
March 27, 2026