Resource-Intensive Queries
Adversaries may craft inputs specifically designed to increase the compute resources required for processing. For generative AI models, adversaries may use long input sequences, requests for extremely long outputs, or prompts that require complex reasoning to drive up compute costs [\[1\]][1]. For vision and language models, "sponge examples" [\[2\]][2] can be used to maximize energy consumption and decision latency. Compared with simply flooding the model with excessive queries, a small number of resource-intensive queries may be harder to detect, block, or rate-limit.

[1]: https://genai.owasp.org/resource/owasp-top-10-for-llm-applications-2025/
[2]: https://arxiv.org/abs/2006.03463
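One mitigation implied by the last point is to rate-limit by estimated compute cost rather than by request count, since a per-request counter never trips on a handful of expensive queries. The sketch below is illustrative only: the names (`estimate_cost`, `CostBudget`) and the cost weights are assumptions, not any real library's API or a model's true cost function.

```python
# Hypothetical sketch: cost-weighted admission control vs. naive request counting.
# All names and weights here are illustrative assumptions.

def estimate_cost(prompt_tokens: int, max_output_tokens: int) -> int:
    """Rough per-request compute cost: input processing plus generation.
    The 4x generation weight is a placeholder; real costs are model-specific."""
    return prompt_tokens + 4 * max_output_tokens

class CostBudget:
    """Admit requests until a compute budget for the current window is spent."""
    def __init__(self, budget: int):
        self.budget = budget
        self.spent = 0

    def admit(self, prompt_tokens: int, max_output_tokens: int) -> bool:
        cost = estimate_cost(prompt_tokens, max_output_tokens)
        if self.spent + cost > self.budget:
            return False  # would exhaust the window's compute budget
        self.spent += cost
        return True

# A naive 10-requests-per-window counter admits both of these; the cost
# budget rejects the single resource-intensive query immediately.
limiter = CostBudget(budget=50_000)
cheap = limiter.admit(prompt_tokens=100, max_output_tokens=200)        # cost 900
sponge = limiter.admit(prompt_tokens=8_000, max_output_tokens=16_000)  # cost 72_000
```

A production system would also need to reconcile the estimate against actual usage after the response completes, since `max_output_tokens` only bounds, not predicts, generation length.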
| Severity | CVE | Headline | Package | CVSS |
|---|---|---|---|---|
| HIGH | CVE-2026-41680 | marked: infinite recursion DoS crashes Node.js via OOM | marked | 7.5 |
| HIGH | CVE-2026-44556 | open-webui: auth bypass allows unrestricted model access | open-webui | 7.1 |
| MEDIUM | CVE-2026-40115 | PraisonAI: unbounded body read enables local DoS | PraisonAI | 6.2 |
| HIGH | CVE-2026-33079 | mistune: ReDoS exposes Jupyter/AI services to DoS | mistune | — |