ATLAS Landscape
AML.T0080.000
Memory
Adversaries may manipulate the memory of a large language model (LLM) in order to persist changes to the LLM to future chat sessions. Memory is a common feature in LLMs that allows them to remember information across chat sessions by utilizing a user-specific database. Because the memory is controlled via normal conversations with the user (e.g. "remember my preference for ...") an adversary can inject memories via Direct or Indirect Prompt Injection. Memories may contain malicious instructions (e.g. instructions that leak private conversations) or may promote the adversary's hidden agenda (e.g. manipulating the user).
9 CVEs mapped
View on MITRE ATLAS →
| Severity | CVE | Headline | Package | CVSS |
|---|---|---|---|---|
| HIGH | CVE-2026-44843 | LangChain: deserialization poisons LLM chat history | langchain-core | 8.2 |
| MEDIUM | CVE-2026-28277 | langgraph: Deserialization enables RCE | langgraph | 6.8 |
| MEDIUM | CVE-2024-7041 | open-webui: IDOR enables cross-user memory tampering | open-webui | 6.5 |
| MEDIUM | CVE-2026-34451 | anthropic-ai/sdk: memory tool path traversal escape | @anthropic-ai/sdk | — |
| UNKNOWN | CVE-2026-41686 | @anthropic-ai/sdk: insecure file perms expose agent memory | @anthropic-ai/sdk | — |
| HIGH | GHSA-2r2p-4cgf-hv7h | engramx: CSRF injects persistent prompts into AI agents | — | |
| MEDIUM | GHSA-f934-5rqf-xx47 | OpenClaw: path traversal in memory_get reads arbitrary workspace files | openclaw | — |
| MEDIUM | CVE-2026-34452 | Anthropic SDK: TOCTOU symlink escape in async memory tool | anthropic | — |
| MEDIUM | CVE-2026-34450 | anthropic-sdk: insecure file perms expose agent memory | anthropic | — |
AI Threat Alert