Generate Malicious Commands
Adversaries may use large language models (LLMs) to dynamically generate malicious commands from natural language. Dynamically generated commands may be harder to detect because the attack signature is constantly changing, and AI-generated commands may also allow adversaries to adapt more rapidly to different environments and adjust their tactics. Adversaries may utilize LLMs present in the victim's environment or call out to externally hosted services: [APT28](https://attack.mitre.org/groups/G0007) utilized a model hosted on Hugging Face in a campaign with their LAMEHUG malware [\[1\]][1]. In either case, prompts to generate malicious code can blend in with normal traffic.

[1]: https://logpoint.com/en/blog/apt28s-new-arsenal-lamehug-the-first-ai-powered-malware
| Severity | CVE | Headline | Package | CVSS |
|---|---|---|---|---|
| CRITICAL | CVE-2026-41265 | Flowise: RCE via prompt injection in Airtable Agent | flowise | 9.8 |
| CRITICAL | CVE-2026-41264 | Flowise: prompt injection → unsandboxed RCE via CSV Agent | flowise-components | 9.8 |
| HIGH | CVE-2026-42079 | PPTAgent: eval injection enables RCE via LLM prompt injection | — | 8.6 |
| HIGH | GHSA-f228-chmx-v6j6 | Flowise: prompt injection RCE via AirtableAgent | flowise-components | 8.3 |
| CRITICAL | GHSA-v38x-c887-992f | Flowise: prompt injection bypasses Python sandbox RCE | flowise-components | — |
| UNKNOWN | CVE-2026-33873 | Langflow: server-side RCE via LLM-generated code exec | langflow | — |
| UNKNOWN | CVE-2024-10950 | gpt_academic: RCE via unsandboxed prompt injection | gpt_academic | — |
| UNKNOWN | CVE-2024-48919 | Cursor IDE: prompt injection triggers terminal RCE | — | — |
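The CVEs above share one vulnerability class: application code passes LLM output straight to `eval()`/`exec()` with no sandboxing (CWE-94), so a prompt-injected input becomes remote code execution. A minimal sketch of the pattern and one mitigation, with `mock_llm` standing in for a real model API call (the function names and the rejected-construct list are illustrative assumptions, not code from any of the affected projects):

```python
import ast
import os

def mock_llm(prompt: str) -> str:
    # Hypothetical stand-in for a model API call. Under prompt injection,
    # attacker-influenced input can steer the model into emitting code
    # like this instead of the expression the developer expected.
    return "__import__('os').getcwd()"

def vulnerable_agent(user_input: str):
    # The pattern behind the advisories above: model output is evaluated
    # directly, so whatever the model emits runs with app privileges.
    return eval(mock_llm(user_input))

def guarded_agent(user_input: str):
    # One mitigation: parse the generated code and reject dangerous
    # constructs (imports, attribute access) before ever executing it.
    code = mock_llm(user_input)
    tree = ast.parse(code, mode="eval")
    for node in ast.walk(tree):
        if isinstance(node, (ast.Attribute, ast.Import, ast.ImportFrom)):
            raise ValueError("disallowed construct in model output")
    return eval(code)
```

Here `guarded_agent` rejects the injected `__import__('os').getcwd()` because the `.getcwd` attribute access fails the AST check; a production fix would instead run generated code in an isolated sandbox, since expression allowlists are easy to get wrong.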