Flowise: Airtable_Agent Code Injection Remote Code Execution Vulnerability
PraisonAI Vulnerable to OS Command Injection
husky/gpt_academic version <= 3.83, the plugin `CodeInterpreter` is vulnerable to code injection caused by prompt injection. The root cause is the execution of user-provided prompts that generate untrusted code
Flowise: Remote code execution vulnerability in AirtableAgent.ts caused by lack
enabled, channel metadata (topic/description) can be incorporated into the model's system prompt. Prompt injection is a documented risk for LLM-driven systems. This issue increases the injection surface
Prompt injection vulnerability in 1millionbot Millie chatbot that occurs when a user manages to evade chat restrictions using Boolean prompt injection techniques (formulating a question in such a way that
JSONalyzeQueryEngine` in the run-llama/llama_index repository allows for SQL injection via prompt injection. This can lead to arbitrary file creation and Denial-of-Service (DoS) attacks. The vulnerability affects
GraphCypherQAChain class of langchain-ai/langchain version 0.2.5 allows for SQL injection through prompt injection. This vulnerability can lead to unauthorized data manipulation, data exfiltration, denial of service
server CORS wildcard + auth-off-by-default enables CSRF graph exfiltration and persistent indirect prompt injection
Flowise: APIChain Prompt Injection SSRF in GET/POST API Chains
langchain-ai/langchainjs versions 0.2.5 and all versions with this class allows for prompt injection, leading to SQL injection. This vulnerability permits unauthorized data manipulation, data exfiltration, denial of service
PandasAI uses an interactive prompt function that is vulnerable to prompt injection and run arbitrary Python code that can lead to Remote Code Execution (RCE) instead of the intended explanation
PraisonAIAgents: Environment Variable Secret Exfiltration via os.path.expandvars() Bypassing shell=False
requires critical-level approval, read_skill_file has neither protection. An agent influenced by prompt injection can exfiltrate sensitive files without triggering any approval prompt
PraisonAIAgents: Path Traversal via Unvalidated Glob Pattern in list_files
from the lack of proper sandboxing when evaluating an LLM generated python script. Using prompt injection techniques, an unauthenticated attacker with the ability to send prompts to a chatflow using
@mobilenext/mobile-mcp: Arbitrary Android Intent Execution via mobile_open_url
Langchain through 0.0.155, prompt injection allows an attacker to force the service to retrieve data from an arbitrary URL, essentially providing SSRF and potentially injecting content into downstream tasks
Flowise: CSV Agent Prompt Injection Remote Code Execution Vulnerability
LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method
AI Threat Alert