CVE-2026-40157: PraisonAI: path traversal allows arbitrary file write via recipe unpack

GHSA-99g3-w8gr-x37c CRITICAL
Published April 10, 2026
CISO Take

PraisonAI's recipe unpack command extracts .praison tar archives without any path validation, allowing a malicious bundle to write files anywhere the executing user has filesystem permissions. Any developer or team consuming shared .praison bundles is exposed — an attacker needs only to distribute a crafted bundle and trick a user into running `praisonai recipe unpack`, after which overwriting .bashrc, SSH authorized_keys, or cron files is trivial. Exploitation requires no specialized skill: the published PoC is 20 lines of Python, and the attack surface is recipe repositories, tutorials, and colleague-shared bundles. Critically, a safe extraction function already existed in the same codebase and was used by two other commands — this is a clear developer oversight, not an architectural gap. Upgrade to PraisonAI >= 4.5.128 immediately and audit any .praison bundles previously unpacked from untrusted sources.

Sources: NVD, GitHub Advisory, ATLAS

Risk Assessment

Critical. The exploitation bar is minimal — no AI or ML knowledge required, just a crafted `.praison` archive and a social engineering pretext. The blast radius scales with the executing user's filesystem permissions: on a developer workstation this typically means persistent code execution via shell config injection, SSH key insertion, or credential file overwrite. The PraisonAI package has 29 other CVEs, indicating a pattern of insufficient input validation. No EPSS data or CISA KEV listing is available, but the trivially low exploitation complexity offsets the lack of confirmed in-the-wild activity. AI agent developers running shared recipe workflows in team or CI/CD environments face the highest exposure.

Attack Kill Chain

Artifact Delivery
Attacker crafts a malicious .praison bundle containing path traversal entries (e.g., ../../.bashrc) and distributes it via a public recipe repository, tutorial link, or direct colleague share.
AML.T0011.000
User Execution
Victim runs `praisonai recipe unpack malicious.praison -o ./recipes`, invoking the vulnerable cmd_unpack function which iterates tar members and calls raw tar.extract() without any path validation.
AML.T0011
Path Traversal Write
Python's tarfile.extract() resolves the traversal path and writes attacker-controlled content to target files (e.g., .bashrc, authorized_keys) outside the intended output directory.
Persistence and Impact
Overwritten shell configs or SSH authorized_keys execute attacker payload on next shell initialization, enabling API key exfiltration, persistent backdoor access, or full compromise of the AI development environment.
AML.T0112
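The write step in the chain above comes down to simple path arithmetic: `tarfile` joins each member name onto the destination directory before writing. A minimal sketch (the paths are illustrative, not taken from the advisory) of where a hostile member name actually lands:

```python
import os

def resolved_target(dest_dir: str, member_name: str) -> str:
    # tarfile joins the member name onto the destination directory;
    # realpath shows where the file lands once ".." segments collapse.
    return os.path.realpath(os.path.join(dest_dir, member_name))

# A benign member stays inside the output directory...
print(resolved_target("/home/dev/recipes/agent-bundle", "config.yaml"))
# ...while a traversal member climbs out of it entirely
# (here, two levels up into the user's home directory).
print(resolved_target("/home/dev/recipes/agent-bundle", "../../.bashrc"))
```

Without a check that the resolved path still starts with the destination directory, every such member is written wherever its `..` segments point.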

Affected Systems

| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| PraisonAI | pip | >= 2.7.2, < 4.5.128 | 4.5.128 |

Do you use PraisonAI? If your installed version falls in the range above, you're affected.
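To triage exposure quickly, compare the installed version against the advisory's range. A minimal sketch using naive dotted-integer comparison (a production check should use a real version parser such as `packaging.version`, which also handles pre-release suffixes this sketch would choke on):

```python
def is_vulnerable(version: str) -> bool:
    # Vulnerable range from the advisory: >= 2.7.2, < 4.5.128
    parts = tuple(int(p) for p in version.split("."))
    return (2, 7, 2) <= parts < (4, 5, 128)

print(is_vulnerable("4.5.127"))  # True: inside the vulnerable range
print(is_vulnerable("4.5.128"))  # False: patched
```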

Severity & Risk

CVSS 3.1: N/A
EPSS: N/A
Exploitation Status: No known exploitation
Sophistication: Trivial

Recommended Action

  1. Upgrade to PraisonAI >= 4.5.128 immediately — the patch replaces raw tar.extract() in cmd_unpack with the existing _safe_extractall() function that rejects absolute paths, '..' segments, and resolved paths outside the destination.
  2. If patching is not immediately possible, disable or restrict use of `praisonai recipe unpack` with bundles sourced outside your direct control.
  3. Audit previously unpacked .praison bundles: inspect the output directory and its parent directories for unexpected or recently modified files (especially .bashrc, .zshrc, .profile, .ssh/authorized_keys, crontabs).
  4. Implement file integrity monitoring (FIM) on shell config files and SSH directories; alert on writes from praisonai-related processes.
  5. Treat all .praison bundles from external sources as untrusted input — apply the same scrutiny as arbitrary code execution artifacts.
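For steps 2 and 3, the validation rules the advisory attributes to `_safe_extractall` can also be applied as a read-only pre-flight check on a bundle before (or instead of) unpacking it. This is an illustrative sketch; the function name and structure are ours, not PraisonAI's:

```python
import os
import tarfile

def unsafe_members(bundle_path: str, dest_dir: str) -> list[str]:
    """List archive members that would land outside dest_dir.

    Mirrors the checks the advisory describes: absolute paths,
    '..' segments, and resolved paths escaping the destination.
    """
    dest = os.path.realpath(dest_dir)
    flagged = []
    with tarfile.open(bundle_path, "r:gz") as tar:
        for member in tar.getmembers():
            if os.path.isabs(member.name) or ".." in member.name.split("/"):
                flagged.append(member.name)
                continue
            resolved = os.path.realpath(os.path.join(dest, member.name))
            if not resolved.startswith(dest + os.sep):
                flagged.append(member.name)
    return flagged
```

A non-empty result means the bundle would write outside the destination and should not be unpacked.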

Classification

Compliance Impact

This CVE is relevant to:

ISO 42001: A.6.2.6 - AI System Security
NIST AI RMF: MANAGE 2.2 - Risk Treatment
OWASP LLM Top 10: LLM05:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2026-40157?

CVE-2026-40157 is a critical path traversal vulnerability in PraisonAI's `praisonai recipe unpack` command. The command extracts `.praison` tar archives without validating member paths, so a malicious bundle containing `../../` entries can write attacker-controlled files anywhere the executing user has filesystem permissions — including .bashrc, SSH authorized_keys, and cron files. Versions >= 2.7.2 and < 4.5.128 are affected; upgrade to 4.5.128 or later and audit any bundles previously unpacked from untrusted sources.

Is CVE-2026-40157 actively exploited?

No confirmed active exploitation of CVE-2026-40157 has been reported, but organizations should still patch proactively.

How to fix CVE-2026-40157?

1. Upgrade to PraisonAI >= 4.5.128 immediately — the patch replaces raw tar.extract() in cmd_unpack with the existing _safe_extractall() function that rejects absolute paths, '..' segments, and resolved paths outside the destination.
2. If patching is not immediately possible, disable or restrict use of `praisonai recipe unpack` with bundles sourced outside your direct control.
3. Audit previously unpacked .praison bundles: inspect the output directory and its parent directories for unexpected or recently modified files (especially .bashrc, .zshrc, .profile, .ssh/authorized_keys, crontabs).
4. Implement file integrity monitoring (FIM) on shell config files and SSH directories; alert on writes from praisonai-related processes.
5. Treat all .praison bundles from external sources as untrusted input — apply the same scrutiny as arbitrary code execution artifacts.

What systems are affected by CVE-2026-40157?

This vulnerability affects the following AI/ML architecture patterns: agent frameworks, AI development pipelines, model serving environments.

What is the CVSS score for CVE-2026-40157?

No CVSS score has been assigned yet.

Technical Details

NVD Description

| Field | Value |
|---|---|
| Severity | Critical |
| Type | Path traversal -- arbitrary file write via `tar.extract()` without member validation |
| Affected | `src/praisonai/praisonai/cli/features/recipe.py:1170-1172` |

## Summary

`cmd_unpack` in the recipe CLI extracts `.praison` tar archives using raw `tar.extract()` without validating archive member paths. A `.praison` bundle containing `../../` entries will write files outside the intended output directory. An attacker who distributes a malicious bundle can overwrite arbitrary files on the victim's filesystem when they run `praisonai recipe unpack`.

## Details

The vulnerable code is in `cli/features/recipe.py:1170-1172`:

```python
for member in tar.getmembers():
    if member.name != "manifest.json":
        tar.extract(member, recipe_dir)
```

The only check is whether the member is `manifest.json`. The code never validates member names -- absolute paths, `..` components, and symlinks all pass through. Python's `tarfile.extract()` resolves these relative to the destination, so a member named `../../.bashrc` lands two directories above `recipe_dir`.

The codebase does contain a safe extraction function (`_safe_extractall` in `recipe/registry.py:131-162`) that rejects absolute paths, `..` segments, and resolved paths outside the destination. It is used by the `pull` and `publish` paths, but `cmd_unpack` does not call it.

```python
# recipe/registry.py:141-159 -- safe version exists but is not used by cmd_unpack
def _safe_extractall(tar: tarfile.TarFile, dest_dir: Path) -> None:
    dest = str(dest_dir.resolve())
    for member in tar.getmembers():
        if os.path.isabs(member.name):
            raise RegistryError(...)
        if ".." in member.name.split("/"):
            raise RegistryError(...)
        resolved = os.path.realpath(os.path.join(dest, member.name))
        if not resolved.startswith(dest + os.sep):
            raise RegistryError(...)
    tar.extractall(dest_dir)
```

## PoC

Build a malicious bundle:

```python
import tarfile, io, json

manifest = json.dumps({"name": "legit-recipe", "version": "1.0.0"}).encode()
with tarfile.open("malicious.praison", "w:gz") as tar:
    info = tarfile.TarInfo(name="manifest.json")
    info.size = len(manifest)
    tar.addfile(info, io.BytesIO(manifest))
    payload = b"export EVIL=1 # injected by malicious recipe\n"
    evil = tarfile.TarInfo(name="../../.bashrc")
    evil.size = len(payload)
    tar.addfile(evil, io.BytesIO(payload))
```

Trigger:

```bash
praisonai recipe unpack malicious.praison -o ./recipes
# Expected: files written only under ./recipes/legit-recipe/
# Actual: .bashrc written two directories above the output dir
```

## Impact

| Path | Traversal blocked? |
|------|--------------------|
| `praisonai recipe pull <name>` | Yes -- uses `_safe_extractall` |
| `praisonai recipe publish <bundle>` | Yes -- uses `_safe_extractall` |
| `praisonai recipe unpack <bundle>` | No -- raw `tar.extract()` |

An attacker needs to get a victim to unpack a malicious `.praison` bundle -- say, through a shared recipe repository, a link in a tutorial, or by sending it to a colleague directly. Depending on filesystem permissions, an attacker can overwrite shell config files (`.bashrc`, `.zshrc`), cron entries, SSH `authorized_keys`, or project files in parent directories. The attacker controls both the path and the content of every written file.

## Remediation

Replace the raw extraction loop with `_safe_extractall`:

```python
# cli/features/recipe.py:1170-1172
# Before:
for member in tar.getmembers():
    if member.name != "manifest.json":
        tar.extract(member, recipe_dir)

# After:
from praisonai.recipe.registry import _safe_extractall
_safe_extractall(tar, recipe_dir)
```

### Affected paths

- `src/praisonai/praisonai/cli/features/recipe.py:1170-1172` -- `cmd_unpack` extracts tar members without path validation
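Independent of the project-level patch, Python's standard library offers a second layer of defense: since Python 3.12 (PEP 706), `tarfile` extraction accepts a `filter` argument, and `filter="data"` rejects absolute paths, traversal members, and special files at the stdlib level. A self-contained sketch (file and directory names are illustrative):

```python
import io
import os
import sys
import tarfile
import tempfile

# Build a bundle containing a traversal member, then show Python 3.12+'s
# "data" extraction filter (PEP 706) refusing it at the stdlib level.
tmp = tempfile.mkdtemp()
bundle = os.path.join(tmp, "malicious.praison")
with tarfile.open(bundle, "w:gz") as tar:
    payload = b"export EVIL=1\n"
    evil = tarfile.TarInfo(name="../../.bashrc")
    evil.size = len(payload)
    tar.addfile(evil, io.BytesIO(payload))

dest = os.path.join(tmp, "recipes")
os.makedirs(dest)
if sys.version_info >= (3, 12):
    try:
        with tarfile.open(bundle, "r:gz") as tar:
            tar.extractall(dest, filter="data")
    except tarfile.FilterError as exc:
        # The "data" filter raises a FilterError subclass for ../../.bashrc
        # instead of writing it outside dest.
        print("blocked:", exc)
```

On Python 3.14+ the `data` filter becomes the default, but passing it explicitly documents intent and protects interpreters from 3.12 up.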

Exploitation Scenario

An attacker publishes a legitimate-looking PraisonAI recipe bundle to a public registry or embeds a malicious download link in a popular AI agent tutorial. The bundle contains a valid manifest.json alongside a second member with a traversal path such as ../../.bashrc. When a developer downloads the bundle and runs `praisonai recipe unpack agent-bundle.praison -o ./recipes`, Python's tarfile.extract() resolves the traversal and writes attacker-controlled shell code into the developer's .bashrc. The next time the developer opens a terminal, the payload executes silently — exfiltrating Anthropic or OpenAI API keys stored in environment variables, injecting a reverse shell callback, or adding a persistent SSH backdoor into the AI development environment.

Timeline

Published: April 10, 2026
Last Modified: April 10, 2026
First Seen: April 10, 2026
