ATLAS Landscape
AML.T0056

Extract LLM System Prompt

Adversaries may attempt to extract a large language model's (LLM) system prompt, either by using prompt injection to induce the model to reveal its own system prompt or by obtaining it from a configuration file. System prompts can form part of an AI provider's competitive advantage and are thus valuable intellectual property that adversaries may target.
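As an illustrative defensive counterpart to this technique, the sketch below flags responses that appear to echo verbatim fragments of the system prompt. This is not part of ATLAS; the function names, n-gram size, and threshold are hypothetical assumptions chosen for the example.

```python
# Hypothetical sketch: detect likely system-prompt leakage in model output
# by checking for verbatim word n-grams of the prompt. Names and thresholds
# are illustrative, not a standard API.

def leaked_fragments(system_prompt: str, response: str, n: int = 5) -> set[str]:
    """Return word n-grams of the system prompt that appear verbatim in the response."""
    words = system_prompt.split()
    grams = {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
    return {g for g in grams if g in response}

def looks_like_prompt_leak(system_prompt: str, response: str, threshold: int = 2) -> bool:
    """Heuristic: treat multiple leaked n-grams as a probable extraction attempt."""
    return len(leaked_fragments(system_prompt, response)) >= threshold
```

A guard like this can sit in an output filter: responses scoring above the threshold are blocked or logged before reaching the user. It only catches verbatim leakage; paraphrased extraction would need fuzzier matching.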

Severity   Identifier             CVSS
HIGH       CVE-2026-27001         7.8
MEDIUM     GHSA-766v-q9x3-g744    6.5
MEDIUM     CVE-2026-44563         5.4
MEDIUM     CVE-2025-63390         5.3
MEDIUM     CVE-2026-40151         5.3
MEDIUM     CVE-2024-7045          4.3
MEDIUM     CVE-2025-60511         4.3