Any user who can access the Agentic Assistant in Langflow <= 1.8.1 can achieve full server-side code execution by influencing what Python the LLM generates — no exploit tooling required, just a crafted prompt. Patch to 1.9.0 immediately or disable the Agentic Assistant feature. Treat any Langflow instance accessible to untrusted users as fully compromised until patched.
What is the risk?
HIGH risk despite the absence of a CVSS score. The impact ceiling is full server compromise (RCE), and the attack barrier is low: access to the Agentic Assistant plus a crafted prompt is sufficient. Langflow instances routinely hold high-value credentials (LLM API keys, database connections, cloud IAM tokens) in environment variables, making post-exploitation impact severe. Exposure is significant — Langflow is commonly deployed as an internal developer tool with broad internal access, and some instances are internet-facing.
What systems are affected?
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| langflow | pip | <= 1.8.1 | 1.9.0 |
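A quick way to check whether a deployment falls in the vulnerable range is to compare the installed version against 1.8.1. The sketch below assumes plain `X.Y.Z` release strings (no pre-release suffixes); the `importlib.metadata` lookup at the end only works on a host where langflow is installed.

```python
# Sketch: is an installed langflow version in the vulnerable range (<= 1.8.1)?
# Assumes simple "X.Y.Z" version strings; pre-release tags are not handled.
def is_vulnerable(version: str) -> bool:
    """Return True if the given langflow version is <= 1.8.1."""
    parts = tuple(int(p) for p in version.split("."))
    return parts <= (1, 8, 1)

# On a host with langflow installed, query the actual version:
# from importlib.metadata import version
# print(is_vulnerable(version("langflow")))
```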
Do you run langflow at version 1.8.1 or earlier? You're affected.
What should I do?
Six steps:
1. PATCH: Upgrade langflow to 1.9.0 via pip immediately.
2. DISABLE: If patching is not immediately feasible, restrict or disable access to the Agentic Assistant API endpoint (/api/v1/agentic or equivalent).
3. SANDBOX: Run Langflow in a container as a non-root user, with a read-only filesystem where possible, network egress restrictions, and secrets injected via environment variables rather than hardcoded.
4. RESTRICT ACCESS: Enforce authentication and authorization on all Langflow endpoints; do not expose the instance to the internet without strict ACLs.
5. DETECT: Alert on unexpected subprocess spawning, outbound network connections, and file writes from the Langflow process. Review logs for Agentic Assistant usage prior to patching; any such session should be treated as potentially malicious.
6. ROTATE: If exploitation cannot be ruled out, rotate all credentials accessible to the Langflow process (LLM API keys, DB passwords, cloud tokens).
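One in-process option for the DETECT step is Python's audit hook API (PEP 578, `sys.addaudithook`). Loaded into the Langflow process (for example via `sitecustomize`), a hook sees the standard CPython audit events that an exploit of this bug would trigger: subprocess launches and dynamic code compilation/execution. This is a minimal sketch; the in-memory `alerts` list stands in for whatever alerting pipeline you actually use.

```python
# Sketch: detect dynamic code execution and subprocess spawning from
# inside a Python process using the audit hook API (PEP 578).
# Event names are standard CPython audit events; the alert sink is a
# placeholder -- in production, forward matches to your SIEM.
import sys

SUSPICIOUS = {"subprocess.Popen", "os.system", "exec", "compile"}
alerts = []

def _audit(event, args):
    if event in SUSPICIOUS:
        alerts.append(event)  # placeholder for a real alerting call

sys.addaudithook(_audit)  # note: audit hooks cannot be removed once added
```

Because hooks cannot be unregistered and fire on every audited operation, keep the hook body cheap and non-blocking.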
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2026-33873?
Any user who can access the Agentic Assistant in Langflow <= 1.8.1 can achieve full server-side code execution by influencing what Python the LLM generates — no exploit tooling required, just a crafted prompt. Patch to 1.9.0 immediately or disable the Agentic Assistant feature. Treat any Langflow instance accessible to untrusted users as fully compromised until patched.
Is CVE-2026-33873 actively exploited?
No confirmed active exploitation of CVE-2026-33873 has been reported, but organizations should still patch proactively.
How to fix CVE-2026-33873?
1. PATCH: Upgrade langflow to 1.9.0 via pip immediately. 2. DISABLE: If patching is not immediately feasible, restrict or disable access to the Agentic Assistant API endpoint (/api/v1/agentic or equivalent). 3. SANDBOX: Run Langflow in a container as a non-root user, with a read-only filesystem where possible, network egress restrictions, and secrets injected via environment variables rather than hardcoded. 4. RESTRICT ACCESS: Enforce authentication and authorization on all Langflow endpoints; do not expose the instance to the internet without strict ACLs. 5. DETECT: Alert on unexpected subprocess spawning, outbound network connections, and file writes from the Langflow process. Review logs for Agentic Assistant usage prior to patching; any such session should be treated as potentially malicious. 6. ROTATE: If exploitation cannot be ruled out, rotate all credentials accessible to the Langflow process (LLM API keys, DB passwords, cloud tokens).
What systems are affected by CVE-2026-33873?
This vulnerability affects the following AI/ML architecture patterns: agent frameworks, LLM application development platforms, AI workflow automation, MLOps pipelines.
What is the CVSS score for CVE-2026-33873?
No CVSS score has been assigned yet.
Technical Details
NVD Description
Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assistant feature in Langflow executes LLM-generated Python code during its validation phase. Although this phase appears intended to validate generated component code, the implementation reaches dynamic execution sinks and instantiates the generated class server-side. In deployments where an attacker can access the Agentic Assistant feature and influence the model output, this can result in arbitrary server-side Python execution. Version 1.9.0 fixes the issue.
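The core flaw described above is a general anti-pattern, not unique to Langflow: "validating" generated code with `exec()` is indistinguishable from running it, because module-level statements execute immediately and instantiation can execute more. The sketch below is illustrative only (it is not Langflow's actual code); the payload is benign, and the `ast`-based check shows a syntax-only validation that never executes the input.

```python
# Illustrative sketch (NOT Langflow's actual code): why exec()-based
# "validation" of LLM-generated component code is equivalent to running it.
# The payload here is benign; an attacker-steered LLM could emit anything.
import ast

generated = """
import os
LEAKED = sorted(os.environ)[:1]   # side effect: runs during "validation"

class Component:
    def run(self):
        return "ok"
"""

# Dangerous pattern: exec reaches a dynamic execution sink.
namespace = {}
exec(generated, namespace)            # module-level payload runs here
instance = namespace["Component"]()   # instantiation can run more code

# Safer pattern: syntax-only validation never executes the payload.
tree = ast.parse(generated)           # parse, don't run
has_class = any(isinstance(n, ast.ClassDef) for n in ast.walk(tree))
```

Syntax-level checks like `ast.parse` confirm structure without execution; anything beyond that (behavioral validation) belongs in an isolated sandbox, not the server process.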
Exploitation Scenario
An attacker with access to a Langflow instance — an internal developer, a compromised account, or an unauthenticated user on an internet-exposed deployment — opens the Agentic Assistant and submits a prompt designed to steer the LLM toward generating a 'component' that embeds malicious Python: e.g., a reverse shell, credential dump from environment variables, or file exfiltration. Because Langflow's validation phase executes the generated code server-side and instantiates the generated class, the payload runs with the privileges of the Langflow process. The attacker exfiltrates all environment variables (LLM API keys, DB credentials, cloud tokens), establishes persistence, and pivots to connected infrastructure. No special tooling or AI/ML expertise is required — the LLM does the heavy lifting.
References
- github.com/advisories/GHSA-v8hw-mh8c-jxfc
- nvd.nist.gov/vuln/detail/CVE-2026-33873
- github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/api/router.py
- github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/api/schemas.py
- github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/helpers/code_extraction.py
- github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/helpers/validation.py
- github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/agentic/services/assistant_service.py
- github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/api/utils/core.py
- github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/api/v1/login.py
- github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/backend/base/langflow/services/auth/utils.py
- github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/custom/validate.py
- github.com/langflow-ai/langflow/blob/f7f4d1e70ba5eecd18162ec96f3571c2cfbcd1fc/src/lfx/src/lfx/services/settings/auth.py
- github.com/langflow-ai/langflow/security/advisories/GHSA-v8hw-mh8c-jxfc
Related Vulnerabilities
- CVE-2026-33309 (CVSS 9.9) langflow: Path Traversal enables file access (same package: langflow)
- CVE-2024-37014 (CVSS 9.8) Langflow: unauthenticated RCE via custom component API (same package: langflow)
- CVE-2026-27966 (CVSS 9.8) langflow: Code Injection enables RCE (same package: langflow)
- CVE-2026-33017 (CVSS 9.8) langflow: Code Injection enables RCE (same package: langflow)
- CVE-2024-42835 (CVSS 9.8) Langflow: Unauthenticated RCE via PythonCodeTool (same package: langflow)