GHSA-g985-wjh9-qxxc: PraisonAI: untrusted tools.py import enables RCE

GHSA-g985-wjh9-qxxc HIGH
Published April 10, 2026
CISO Take

PraisonAI automatically imports and executes any tools.py file present in the current working directory when launching agent workflows — no path validation, no sandbox, no warning. With a CVSS of 8.4 (Local/No Privileges Required/No User Interaction), the attack surface is every developer workstation, CI runner, and server where praisonai is invoked, and reproduction is four lines of Python. There is no active CISA KEV listing and no public exploit tool at time of writing, but 41 prior CVEs in this package signal persistent input hygiene weaknesses and sustained attacker interest in this target. Patch immediately to PraisonAI 4.5.139 / praisonaiagents 1.5.140; as an interim control, restrict the directories from which PraisonAI is invoked and treat any tools.py not explicitly authored by your team as suspect.

Sources: GitHub Advisory · MITRE ATLAS · CISA KEV

Risk Assessment

High risk despite the local attack vector designation. The 'local' CVSS vector understates real-world exposure: developers routinely clone untrusted repositories and run AI agent workflows from the project root, where a bundled malicious tools.py executes silently before any workflow logic. Exploitation requires no privileges and no user interaction beyond running a normal PraisonAI command. The full C/I/A:H impact triad means a successful exploit yields complete host compromise including all secrets and connected AI infrastructure. The 41-CVE history for this package indicates a pattern of inadequate input validation and increased likelihood of continued targeting.

Attack Kill Chain

Initial Placement
Attacker places a malicious tools.py in the target's working directory via repository poisoning, social engineering, or write access to a shared project folder.
AML.T0110
Triggered Execution
Victim runs any PraisonAI CLI command; import_tools_from_file() in call.py or _load_local_tools() in tool_resolver.py auto-imports tools.py, executing adversary code immediately before workflow logic.
AML.T0011.002
Host Compromise
Malicious Python code runs with full PraisonAI process privileges, enabling credential theft from environment variables, reverse shell deployment, or lateral movement to connected services.
AML.T0050
Impact
Attacker achieves full control of the host environment and any connected AI infrastructure including LLM API accounts, vector databases, and downstream data pipelines.
AML.T0112
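The triggered-execution step relies on ordinary Python import semantics: a module's top-level statements run the moment it is imported. A minimal, self-contained sketch of that mechanism (the file content and MARKER name are illustrative, not taken from PraisonAI's code):

```python
# Demonstrates that importing a module executes its top-level code immediately.
# The tools.py body below is illustrative; a real attack would run os.system(...) etc.
import importlib.util
import pathlib
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = pathlib.Path(d) / "tools.py"
    path.write_text('MARKER = "executed-at-import"\n')  # top-level statement

    spec = importlib.util.spec_from_file_location("tools", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)  # top-level code runs right here, before any workflow logic

print(mod.MARKER)  # → executed-at-import
```

This is why no user interaction beyond running a normal command is required: the act of loading the module is the exploit.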

Affected Systems

Package          Ecosystem   Vulnerable Range   Patched
PraisonAI        pip         <= 4.5.138         4.5.139
praisonaiagents  pip         <= 1.5.139         1.5.140
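Whether an installed environment falls in the vulnerable ranges above can be checked with a short script; the version thresholds come from the table, but the script itself is a sketch, not part of the advisory (pre-release suffixes like `rc1` are not handled):

```python
# version_check.py — flag installs below the patched versions from the advisory.
from importlib.metadata import version, PackageNotFoundError

PATCHED = {"PraisonAI": (4, 5, 139), "praisonaiagents": (1, 5, 140)}

def parse(v: str) -> tuple:
    """Parse a plain X.Y.Z version string into a comparable tuple."""
    return tuple(int(p) for p in v.split(".")[:3])

def is_vulnerable(pkg: str):
    """True if installed and below the patched version; None if not installed."""
    try:
        return parse(version(pkg)) < PATCHED[pkg]
    except PackageNotFoundError:
        return None

if __name__ == "__main__":
    for pkg in PATCHED:
        state = is_vulnerable(pkg)
        label = "not installed" if state is None else ("VULNERABLE" if state else "patched")
        print(f"{pkg}: {label}")
```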

Severity & Risk

CVSS 3.1: 8.4 / 10
EPSS: N/A
Exploitation Status: No known exploitation
Sophistication: Trivial

Attack Surface

AV: Local
AC: Low
PR: None
UI: None
S: Unchanged
C: High
I: High
A: High

Recommended Action

  1. Patch immediately: upgrade PraisonAI to 4.5.139 and praisonaiagents to 1.5.140.
  2. Audit all existing deployments for unexpected tools.py files in working directories — treat any file not explicitly authored by your team as potentially malicious.
  3. Interim workaround: run PraisonAI only from controlled, purpose-built staging directories rather than project roots where third-party code may reside.
  4. In CI/CD pipelines: add a pre-execution assertion that verifies no tools.py exists in the working directory before invoking any praisonai command.
  5. Detection: alert on creation or modification of tools.py in directories where PraisonAI is expected to execute; monitor for anomalous child process spawning from PraisonAI worker processes.
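The pre-execution assertion from step 4 can be as small as a guard that refuses to proceed when a tools.py is present. A sketch, assuming you wire it into the pipeline yourself; the guard is not a PraisonAI feature:

```python
# ci_guard.py — refuse to invoke praisonai when an unexpected tools.py exists.
# Hypothetical helper; adapt the invocation point to your pipeline.
import pathlib
import sys

def safe_to_run(workdir: str = ".") -> bool:
    """True when no tools.py is present in the working directory."""
    return not (pathlib.Path(workdir) / "tools.py").exists()

if __name__ == "__main__":
    workdir = sys.argv[1] if len(sys.argv) > 1 else "."
    if not safe_to_run(workdir):
        sys.exit("refusing to run: unexpected tools.py in working directory")
```

Run it as `python ci_guard.py "$PWD" && praisonai ...` so a poisoned checkout fails the build instead of executing attacker code.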

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act: Art. 9 - Risk Management System
ISO 42001: A.6.2.6 - AI system operational procedures
NIST AI RMF: GOVERN 6.1 - Policies and procedures for AI risk management
OWASP LLM Top 10: LLM07 - Insecure Plugin Design

Frequently Asked Questions

What is GHSA-g985-wjh9-qxxc?

GHSA-g985-wjh9-qxxc is a high-severity (CVSS 8.4) arbitrary code execution vulnerability in PraisonAI. The framework automatically imports and executes any tools.py file present in the current working directory when launching agent workflows, with no path validation, sandbox, or warning. An attacker who can place a malicious tools.py wherever praisonai is invoked — a cloned repository, CI runner, or shared project folder — gains arbitrary code execution on the host. It is patched in PraisonAI 4.5.139 and praisonaiagents 1.5.140.

Is GHSA-g985-wjh9-qxxc actively exploited?

No confirmed active exploitation of GHSA-g985-wjh9-qxxc has been reported, but organizations should still patch proactively.

How to fix GHSA-g985-wjh9-qxxc?

  1. Patch immediately: upgrade PraisonAI to 4.5.139 and praisonaiagents to 1.5.140.
  2. Audit all existing deployments for unexpected tools.py files in working directories — treat any file not explicitly authored by your team as potentially malicious.
  3. Interim workaround: run PraisonAI only from controlled, purpose-built staging directories rather than project roots where third-party code may reside.
  4. In CI/CD pipelines: add a pre-execution assertion that verifies no tools.py exists in the working directory before invoking any praisonai command.
  5. Detection: alert on creation or modification of tools.py in directories where PraisonAI is expected to execute; monitor for anomalous child process spawning from PraisonAI worker processes.

What systems are affected by GHSA-g985-wjh9-qxxc?

This vulnerability affects the following AI/ML architecture patterns: AI agent frameworks, Multi-agent pipelines, Tool-augmented LLM systems, CI/CD pipelines running AI workflows, Developer workstations.

What is the CVSS score for GHSA-g985-wjh9-qxxc?

GHSA-g985-wjh9-qxxc has a CVSS v3.1 base score of 8.4 (HIGH).

Technical Details

NVD Description

PraisonAI automatically imports `./tools.py` from the current working directory when launching certain components, including call.py, tool_resolver.py, and the CLI tool-loading paths. A malicious tools.py placed in the process working directory is executed immediately, allowing arbitrary Python code execution in the host environment.

Affected Code

  - call.py → `import_tools_from_file()`
  - tool_resolver.py → `_load_local_tools()`
  - tools.py → local tool import flow

PoC

Create tools.py in the directory where PraisonAI is launched:

```python
# tools.py
import os
os.system("echo pwned > /tmp/pwned.txt")
```

Run any PraisonAI component that loads local tools, for example:

```bash
praisonai workflow run safe.yaml
```

Reproduction Steps

  1. Create a malicious tools.py in the current working directory.
  2. Start PraisonAI or invoke a CLI command that loads local tools.
  3. Verify that `/tmp/pwned.txt` or the malicious command output exists.

Impact

An attacker who can place or influence tools.py in the working directory can execute arbitrary code in the PraisonAI process, compromising the host and any connected data.

Reporter: Lakshmikanthan K (letchupkt)

Exploitation Scenario

An attacker embeds a malicious tools.py in an open-source AI workflow template, a shared project repository, or a collaborative workspace. A developer clones the repository and runs 'praisonai workflow run safe.yaml' from the project root. PraisonAI's import_tools_from_file() in call.py auto-imports and immediately executes the malicious tools.py before any workflow logic, giving the attacker arbitrary code execution — enabling LLM API key exfiltration from environment variables, reverse shell deployment, or silent poisoning of connected AI pipelines. In a CI/CD context, the same attack grants access to all pipeline secrets, build artifacts, and downstream deployment infrastructure.
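One way to blunt the credential-exfiltration path described above is to launch praisonai with a stripped environment, so a malicious tools.py finds no API keys in os.environ. A hedged sketch; the allowlist is illustrative and the actual invocation is commented out:

```python
# Launch wrapper sketch: pass only allowlisted environment variables to the
# praisonai process. SAFE_VARS is an illustrative allowlist — extend it with
# whatever your workflow legitimately needs.
import os
import subprocess

SAFE_VARS = {"PATH", "HOME", "LANG", "TMPDIR"}

def scrubbed_env() -> dict:
    """Copy only allowlisted variables; secrets like OPENAI_API_KEY are dropped."""
    return {k: v for k, v in os.environ.items() if k in SAFE_VARS}

# Example (commented out so the sketch has no side effects):
# subprocess.run(["praisonai", "workflow", "run", "safe.yaml"], env=scrubbed_env())
```

This does not prevent code execution — only patching does — but it limits what a triggered payload can steal from the process environment.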

CVSS Vector

CVSS:3.1/AV:L/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

Timeline

Published: April 10, 2026
Last Modified: April 10, 2026
First Seen: April 10, 2026

Related Vulnerabilities