ATLAS Landscape
AML.T0056
Extract LLM System Prompt
Adversaries may attempt to extract a large language model's (LLM) system prompt. This can be done via prompt injection to induce the model to reveal its own system prompt or may be extracted from a configuration file. System prompts can be a portion of an AI provider's competitive advantage and are thus valuable intellectual property that may be targeted by adversaries.
7 CVEs mapped
View on MITRE ATLAS →
| Severity | CVE | Headline | Package | CVSS |
|---|---|---|---|---|
| HIGH | CVE-2026-27001 | OpenClaw: prompt injection via unsanitized workspace path | openclaw | 7.8 |
| MEDIUM | GHSA-766v-q9x3-g744 | praisonaiagents: agent context leak + path traversal | praisonaiagents | 6.5 |
| MEDIUM | CVE-2026-44563 | open-webui: auth bypass exposes restricted LLM models | open-webui | 5.4 |
| MEDIUM | CVE-2025-63390 | anythingllm: Missing Auth allows unauthenticated access | 5.3 | |
| MEDIUM | CVE-2026-40151 | PraisonAI: unauthenticated agent config and system prompt disclosure | PraisonAI | 5.3 |
| MEDIUM | CVE-2024-7045 | open-webui: missing authz exposes admin prompts | open-webui | 4.3 |
| MEDIUM | CVE-2025-60511 | Moodle: IDOR enables unauthorized data access | 4.3 |
AI Threat Alert