CVE-2026-34452: Anthropic SDK: TOCTOU symlink escape in async memory tool

GHSA-w828-4qhx-vxx3 MEDIUM
Published March 31, 2026
CISO Take

The async filesystem memory tool in anthropic Python SDK 0.86.x allows a local attacker to escape the memory sandbox via a symlink swap between path validation and file use — a classic TOCTOU race. Upgrade to 0.87.0 immediately; if you cannot patch, switch to the synchronous memory tool (unaffected) as a stopgap. Blast radius is limited to local attackers with write access to the memory directory, but in shared or containerized agent environments this is a realistic threat.

What is the risk?

Medium risk overall, but elevated in multi-tenant or containerized AI agent deployments where filesystem isolation is the primary control. Local exploitation requires write access to the memory directory, meaning the attacker must already have a partial foothold in the environment. The EPSS score of 0.00016 reflects minimal observed exploitation activity. The local-write-access constraint prevents mass exploitation, but in AI agent architectures where the memory directory is a shared resource or accessible via agent tool invocation, the attack surface widens considerably.

What systems are affected?

Package Ecosystem Vulnerable Range Patched
anthropic pip >= 0.86.0, < 0.87.0 0.87.0

Do you use anthropic? If any service pins a version in the range >= 0.86.0, < 0.87.0, it is affected.

Severity & Risk

CVSS 3.1
N/A
EPSS
0.016%
chance of exploitation in 30 days
Higher than 4% of all CVEs
Exploitation Status
No known exploitation
Sophistication
Moderate

What should I do?

6 steps
  1. Patch immediately: upgrade anthropic Python SDK to 0.87.0 (pip install anthropic==0.87.0).

  2. If patching is not immediately possible, switch from the async memory tool to the synchronous implementation — it is not vulnerable.

  3. Restrict filesystem permissions on the memory directory: ensure only the agent process user can write to it, preventing symlink planting by other local users.

  4. In containerized environments, enforce read-only mounts outside the memory directory and use user namespaces to reduce cross-process write access.

  5. Audit logs for unexpected file access patterns outside the memory sandbox directory.

  6. Scan your dependency lock files for anthropic >= 0.86.0 and < 0.87.0 across all services.
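Step 6 can be automated with a short script. The sketch below is illustrative, not part of the advisory: it only matches exact `==` pins in requirements-style files, and lock formats such as poetry.lock or Pipfile.lock would need their own parsers.

```python
import re

VULN_MIN = (0, 86, 0)  # first vulnerable release
PATCHED = (0, 87, 0)   # first fixed release


def parse_version(v: str) -> tuple:
    """Turn 'X.Y.Z' into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split(".")[:3])


def is_vulnerable(version: str) -> bool:
    """True if version is in the advisory range >= 0.86.0, < 0.87.0."""
    return VULN_MIN <= parse_version(version) < PATCHED


def scan_requirements(text: str) -> list:
    """Return vulnerable pinned anthropic versions found in a
    requirements-style file body."""
    pins = re.findall(r"^anthropic==([0-9][0-9.]*)", text, re.MULTILINE)
    return [v for v in pins if is_vulnerable(v)]
```

Feed it file contents, e.g. `scan_requirements(Path("requirements.txt").read_text())`, and treat any non-empty result as a service needing the 0.87.0 upgrade.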

CISA SSVC Assessment

Decision Track
Exploitation none
Automatable no
Technical Impact partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 9 - Risk management system
ISO 42001
A.6.2.3 - AI system security by design
NIST AI RMF
MANAGE 2.2 - Treatments, responses, and recovery plans for AI risks
OWASP LLM Top 10
LLM03 - Supply Chain
LLM06 - Excessive Agency

Related AI Incidents (17)

Claude Code Agent Reportedly Deleted DataTalks.Club Production Infrastructure, Database, and Snapshots via Terraform
Feb 2026 Alexey Grigorev high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1424
Anthropic Said DeepSeek, Moonshot, and MiniMax Used Fraudulent Accounts and Proxies to Illicitly Distill Claude Capabilities at Scale
Feb 2026 Proxy reseller services, Moonshot AI, MiniMax, DeepSeek high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1395
Anthropic Claude AI Agent Reportedly Caused Financial Losses While Operating Office Vending Machine at Wall Street Journal Headquarters
Dec 2025 The Wall Street Journal, Andon Labs high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1313
Anthropic's Claude Was Reportedly Jailbroken To Allegedly Help Steal Sensitive Mexican Government Data
Dec 2025 Unknown hacker high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1430
Chinese State-Linked Operator (GTG-1002) Reportedly Uses Claude Code for Autonomous Cyber Espionage
Nov 2025 Unknown Chinese state-sponsored entity, State-linked operator using autonomous AI-enabled intrusion workflows, GTG-1002 high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1263
Anthropic Reportedly Identifies AI Misuse in Extortion Campaigns, North Korean IT Schemes, and Ransomware Sales
Aug 2025 Unknown cybercriminals, Ransomware-as-a-service actors, North Korean IT operatives, Government of North Korea high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1201
Malicious Nx npm Packages Reportedly Weaponize AI Coding Agents for Data Exfiltration
Aug 2025 Malicious actors compromising Nx’s CI/CD pipeline and publishing tainted npm packages high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1210
Reported Public Exposure of Over 100,000 LLM Conversations via Share Links Indexed by Search Engines and Archived
Jul 2025 xAI, OpenAI, Mistral, Microsoft, Anthropic, Alibaba high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1186
Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel
May 2025 Anthropic high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1074
Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development
Apr 2025 Unknown malicious actors, Unknown cybercriminals, Influence-as-a-service operators high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1054
Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination
Apr 2025 Scientific authors, Researchers, OpenAI, Anthropic high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1044
Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session
Apr 2025 OpenAI, DeepSeek AI, Anthropic high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1026
At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Mar 2025 Unidentified online communities using chatbots, Spicy Chat, JanitorAI, CrushOn.AI, Chub AI, Character.AI high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #975
AI Models Reportedly Found to Provide Misinformation on Election Processes in Spanish
Oct 2024 OpenAI, Mistral, Meta, Google, Anthropic high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #859
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
Jun 2024 You.com, xAI, Perplexity, OpenAI, Mistral, Microsoft, Meta, John Mark Dougan, Inflection, Google, Anthropic high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #734
Alleged LLMjacking Targets AI Cloud Services with Stolen Credentials
May 2024 LLMjacking Attackers Exploiting Laravel, Entities engaging in Russian sanctions evasion high confidence

Package "anthropic" mentioned in incident

AIID #898
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
Feb 2022 TigerWeb, Storm-1516, Russian state media, Pravda disinformation network, Portal Kombat, John Mark Dougan, Government of Russia high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #968

Source: AI Incident Database (AIID)

Frequently Asked Questions

What is CVE-2026-34452?

The async filesystem memory tool in anthropic Python SDK 0.86.x allows a local attacker to escape the memory sandbox via a symlink swap between path validation and file use — a classic TOCTOU race. Upgrade to 0.87.0 immediately; if you cannot patch, switch to the synchronous memory tool (unaffected) as a stopgap. Blast radius is limited to local attackers with write access to the memory directory, but in shared or containerized agent environments this is a realistic threat.

Is CVE-2026-34452 actively exploited?

No confirmed active exploitation of CVE-2026-34452 has been reported, but organizations should still patch proactively.

How to fix CVE-2026-34452?

1. Patch immediately: upgrade anthropic Python SDK to 0.87.0 (pip install anthropic==0.87.0).
2. If patching is not immediately possible, switch from the async memory tool to the synchronous implementation — it is not vulnerable.
3. Restrict filesystem permissions on the memory directory: ensure only the agent process user can write to it, preventing symlink planting by other local users.
4. In containerized environments, enforce read-only mounts outside the memory directory and use user namespaces to reduce cross-process write access.
5. Audit logs for unexpected file access patterns outside the memory sandbox directory.
6. Scan your dependency lock files for anthropic >= 0.86.0 and < 0.87.0 across all services.

What systems are affected by CVE-2026-34452?

This vulnerability affects the following AI/ML architecture patterns: agent frameworks, LLM application backends, AI agent memory systems, multi-tenant AI inference environments.

What is the CVSS score for CVE-2026-34452?

No CVSS score has been assigned yet.

Technical Details

NVD Description

The Claude SDK for Python provides access to the Claude API from Python applications. From version 0.86.0 to before version 0.87.0, the async local filesystem memory tool in the Anthropic Python SDK validated that model-supplied paths resolved inside the sandboxed memory directory, but then returned the unresolved path for subsequent file operations. A local attacker able to write to the memory directory could retarget a symlink between validation and use, causing reads or writes to escape the sandbox. The synchronous memory tool implementation was not affected. This issue has been patched in version 0.87.0.
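The validate-resolved-but-return-unresolved shape described above can be illustrated in miniature. This is a hedged sketch of the bug class, not the SDK's actual code; `MEMORY_DIR` and both function names are hypothetical.

```python
from pathlib import Path

# Hypothetical sandbox root; the real SDK's directory layout may differ.
MEMORY_DIR = Path("/srv/agent/memory")


def validate_then_use_unsafe(user_path: str) -> Path:
    """Vulnerable shape: the *resolved* path is checked, but the
    *unresolved* path is returned. Later I/O re-resolves any symlink,
    so a swap after this check escapes the sandbox (TOCTOU)."""
    candidate = MEMORY_DIR / user_path
    resolved = candidate.resolve()
    if not resolved.is_relative_to(MEMORY_DIR.resolve()):
        raise ValueError("path escapes memory sandbox")
    return candidate  # BUG: symlinks not yet collapsed


def validate_then_use_safe(user_path: str) -> Path:
    """Patched shape: return the resolved path, so subsequent I/O
    targets exactly what was checked."""
    resolved = (MEMORY_DIR / user_path).resolve()
    if not resolved.is_relative_to(MEMORY_DIR.resolve()):
        raise ValueError("path escapes memory sandbox")
    return resolved
```

Even the "safe" shape leaves a residual window if a symlink appears between `resolve()` and the actual open; fully closing it requires symlink-refusing opens (e.g. O_NOFOLLOW) or openat-style traversal rather than path strings alone.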

Exploitation Scenario

An attacker with local write access (e.g., a compromised container co-tenant, a malicious tool invoked by the agent, or a low-privilege service account on the same host) plants a symlink inside the memory directory pointing to a target outside the sandbox — for example, /app/.env or ~/.aws/credentials. When the async memory tool validates the path, the symlink resolves to a location inside the sandbox, passing the check. Before the subsequent file I/O operation executes, the attacker atomically replaces the symlink target to point to the sensitive file. The tool performs the read or write against the sensitive target. In an agent context, the attacker could use an agent-invokable tool to trigger this race, exfiltrating secrets or injecting malicious content into config files to escalate privileges.
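One POSIX-level mitigation against the symlink swap described above is to refuse symlinks at open time rather than trusting a prior path check. The sketch below is a defensive pattern, not SDK code; the function name is illustrative, and note that O_NOFOLLOW only guards the final path component, not symlinks in intermediate directories.

```python
import os


def read_no_follow(path: str) -> bytes:
    """Read a file while refusing to traverse a symlink at the final
    component: with O_NOFOLLOW, open() fails (ELOOP on Linux) if
    `path` itself is a symlink, closing the validate-then-use window
    for that component."""
    fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
    try:
        return os.read(fd, 1 << 20)  # read up to 1 MiB
    finally:
        os.close(fd)
```

With this pattern, the attacker's swapped symlink causes the open to fail loudly instead of silently redirecting the I/O to a file outside the sandbox.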

Timeline

Published
March 31, 2026
Last Modified
April 1, 2026
First Seen
March 31, 2026

Related Vulnerabilities