LLM Prompt Injection
An adversary may craft malicious prompts as inputs to an LLM that cause the LLM to act in unintended ways. These "prompt injections" are often designed to cause the model to ignore aspects of its original instructions and follow the adversary's instructions instead. Prompt Injections can be an initial access vector to the LLM that provides the adversary with a foothold to carry out other steps in their operation. They may be designed to bypass defenses in the LLM, or allow the adversary to issue privileged commands. The effects of a prompt injection can persist throughout an interactive session with an LLM. Malicious prompts may be injected directly by the adversary ([Direct](/techniques/AML.T0051.000)) either to leverage the LLM to generate harmful content or to gain a foothold on the system and lead to further effects. Prompts may also be injected indirectly when as part of its normal operation the LLM ingests the malicious prompt from another data source ([Indirect](/techniques/AML.T0051.001)). This type of injection can be used by the adversary to a foothold on the system or to target the user of the LLM. Malicious prompts may also be [Triggered](/techniques/AML.T0051.002) user actions or system events.
| Severity | CVE | Headline | Package | CVSS |
|---|---|---|---|---|
| CRITICAL | CVE-2026-27966 | langflow: Code Injection enables RCE | langflow | 9.8 |
| CRITICAL | CVE-2026-2654 | smolagents: SSRF allows internal network access | smolagents | 9.8 |
| CRITICAL | CVE-2025-9556 | langchaingo: Jinja2 SSTI allows host filesystem read | 9.8 | |
| CRITICAL | CVE-2024-8309 | LangChain GraphCypher: prompt injection enables DB wipe | langchain | 9.8 |
| CRITICAL | CVE-2024-7042 | LangChainJS: prompt injection enables full graph DB takeover | langchain | 9.8 |
| CRITICAL | CVE-2024-12366 | PandasAI: prompt injection enables unauthenticated RCE | 9.8 | |
| CRITICAL | CVE-2023-44467 | LangChain: RCE bypass via __import__ in PAL chain | langchain_experimental | 9.8 |
| CRITICAL | CVE-2023-39659 | LangChain: RCE via unsanitized PythonAstREPL input | langchain | 9.8 |
| CRITICAL | CVE-2023-38860 | LangChain: RCE via unsanitized prompt parameter | langchain | 9.8 |
| CRITICAL | CVE-2026-30741 | OpenClaw: RCE via request-side prompt injection | openclaw | 9.8 |
| HIGH | CVE-2025-30358 | Mesop: class pollution enables DoS and LLM jailbreak | 8.1 | |
| HIGH | CVE-2024-38459 | LangChain: Python REPL code execution without opt-in | langchain-experimental | 7.8 |
| HIGH | CVE-2026-27001 | OpenClaw: prompt injection via unsanitized workspace path | openclaw | 7.8 |
| HIGH | CVE-2024-58340 | langchain: security flaw enables exploitation | langchain | 7.5 |
| HIGH | CVE-2024-12911 | llama-index: SQLi+DoS via prompt injection in query engine | llamaindex | 7.1 |
| HIGH | CVE-2025-5018 | Hive Support WP: OpenAI key theft + prompt hijack | 7.1 | |
| MEDIUM | CVE-2025-68949 | n8n: security flaw enables exploitation | n8n | 5.3 |
| UNKNOWN | CVE-2024-10950 | gpt_academic: RCE via unsandboxed prompt injection | gpt_academic | — |
| HIGH | GHSA-gfmx-pph7-g46x | openclaw: trust boundary bypass enables prompt injection | openclaw | — |
| HIGH | GHSA-jf56-mccx-5f3f | OpenClaw: wake hook trust violation elevates to System prompt | openclaw | — |
| UNKNOWN | CVE-2026-2275 | CrewAI: RCE via Docker fallback in CodeInterpreter | — |
AI Threat Alert