CVE-2026-34450: anthropic-sdk: insecure file perms expose agent memory

GHSA-q5f5-3gjm-7mfm MEDIUM
Published March 31, 2026
CISO Take

Upgrade the anthropic Python SDK to 0.87.0 immediately if you use the filesystem memory tool. Docker deployments face the highest risk — permissive default umasks make memory files world-writable, allowing any co-resident process to tamper with agent state and silently poison future model context. As an interim control, set explicit umask restrictions in your Dockerfiles and audit existing memory file permissions.

What is the risk?

Medium severity with elevated risk in containerized environments. EPSS is near-zero (0.00012) and exploitation requires local access, limiting remote attack surface. However, Docker base images commonly ship with permissive umasks, making the write primitive trivially available to any co-resident service or compromised process. The ability to modify agent memory — effectively injecting false context into future model interactions — elevates impact well beyond simple data disclosure.

What systems are affected?

Package    Ecosystem   Vulnerable Range       Patched
anthropic  pip         >= 0.86.0, < 0.87.0    0.87.0

Do you use the anthropic package in the vulnerable range? You're affected.

Severity & Risk

CVSS 3.1: N/A (no score assigned yet)
EPSS: 0.0% chance of exploitation in 30 days (higher than 1% of all CVEs)
Exploitation Status: No known exploitation
Sophistication: Trivial

What should I do?

5 steps
  1. PATCH

    Upgrade anthropic SDK to 0.87.0 immediately.

  2. AUDIT

    Locate exposed memory files with `find . -name '*.json' -perm /o+rw`; restrict with `chmod 600`.

  3. HARDEN

    Set explicit umask (0o077 or stricter) in Dockerfiles and container entrypoint scripts.

  4. DETECT

    Monitor memory file modification timestamps for unexpected writes outside normal agent process context; alert on anomalies.

  5. ROTATE

    Treat existing memory files as potentially compromised — purge and recreate agent state if memory files were accessible to other processes.
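The AUDIT and HARDEN steps above can be sketched in Python. This is a minimal sketch, not part of the SDK; `MEMORY_DIR` is a hypothetical path — point it at wherever your deployment persists agent memory files.

```python
import os
import stat

# HARDEN: restrict the process umask so any newly created files are owner-only.
os.umask(0o077)

MEMORY_DIR = "./memory"  # hypothetical location of agent memory files

def audit_and_restrict(root: str) -> list[str]:
    """Find files readable or writable by group/others and tighten them to 0600.

    Returns the list of paths whose permissions were changed.
    """
    fixed = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mode = stat.S_IMODE(os.stat(path).st_mode)
            # Any group/other permission bit set means the file is exposed.
            if mode & (stat.S_IRWXG | stat.S_IRWXO):
                os.chmod(path, 0o600)
                fixed.append(path)
    return fixed
```

Run this once over the memory directory as part of remediation; files it touches should also be treated as candidates for the ROTATE step, since tightening permissions now does not undo any earlier exposure.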

CISA SSVC Assessment

Decision: Track
Exploitation: None
Automatable: No
Technical Impact: Partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 9 - Risk Management System
ISO 42001
A.8.4 - Data Access Controls for AI Systems
NIST AI RMF
MANAGE-2.2 - Risk Treatment for Identified AI Risks
OWASP LLM Top 10
LLM02 - Sensitive Information Disclosure
LLM04 - Data and Model Poisoning

Related AI Incidents (17)

Claude Code Agent Reportedly Deleted DataTalks.Club Production Infrastructure, Database, and Snapshots via Terraform
Feb 2026 Alexey Grigorev high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1424
Anthropic Said DeepSeek, Moonshot, and MiniMax Used Fraudulent Accounts and Proxies to Illicitly Distill Claude Capabilities at Scale
Feb 2026 Proxy reseller services, Moonshot AI, MiniMax, DeepSeek high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1395
Anthropic Claude AI Agent Reportedly Caused Financial Losses While Operating Office Vending Machine at Wall Street Journal Headquarters
Dec 2025 The Wall Street Journal, Andon Labs high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1313
Anthropic's Claude Was Reportedly Jailbroken To Allegedly Help Steal Sensitive Mexican Government Data
Dec 2025 Unknown hacker high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1430
Chinese State-Linked Operator (GTG-1002) Reportedly Uses Claude Code for Autonomous Cyber Espionage
Nov 2025 Unknown Chinese state-sponsored entity, State-linked operator using autonomous AI-enabled intrusion workflows, GTG-1002 high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1263
Anthropic Reportedly Identifies AI Misuse in Extortion Campaigns, North Korean IT Schemes, and Ransomware Sales
Aug 2025 Unknown cybercriminals, Ransomware-as-a-service actors, North Korean IT operatives, Government of North Korea high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1201
Malicious Nx npm Packages Reportedly Weaponize AI Coding Agents for Data Exfiltration
Aug 2025 Malicious actors compromising Nx’s CI/CD pipeline and publishing tainted npm packages high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1210
Reported Public Exposure of Over 100,000 LLM Conversations via Share Links Indexed by Search Engines and Archived
Jul 2025 xAI, OpenAI, Mistral, Microsoft, Anthropic, Alibaba high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1186
Citation Errors in Concord Music v. Anthropic Attributed to Claude AI Use by Defense Counsel
May 2025 Anthropic high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1074
Anthropic Report Details Claude Misuse for Influence Operations, Credential Stuffing, Recruitment Fraud, and Malware Development
Apr 2025 Unknown malicious actors, Unknown cybercriminals, Influence-as-a-service operators high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1054
Reported Emergence of 'Vegetative Electron Microscopy' in Scientific Papers Traced to Purported AI Training Data Contamination
Apr 2025 Scientific authors, Researchers, OpenAI, Anthropic high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1044
Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session
Apr 2025 OpenAI, DeepSeek AI, Anthropic high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #1026
At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Mar 2025 Unidentified online communities using chatbots, Spicy Chat, JanitorAI, CrushOn.AI, Chub AI, Character.AI high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #975
AI Models Reportedly Found to Provide Misinformation on Election Processes in Spanish
Oct 2024 OpenAI, Mistral, Meta, Google, Anthropic high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #859
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
Jun 2024 You.com, xAI, Perplexity, OpenAI, Mistral, Microsoft, Meta, John Mark Dougan, Inflection, Google, Anthropic high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #734
Alleged LLMjacking Targets AI Cloud Services with Stolen Credentials
May 2024 LLMjacking Attackers Exploiting Laravel, Entities engaging in Russian sanctions evasion high confidence

Package "anthropic" mentioned in incident

AIID #898
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
Feb 2022 TigerWeb, Storm-1516, Russian state media, Pravda disinformation network, Portal Kombat, John Mark Dougan, Government of Russia high confidence

Package "anthropic" mentioned in incident; Company "Anthropic" in CVE description

AIID #968

Source: AI Incident Database (AIID)

Frequently Asked Questions

What is CVE-2026-34450?

CVE-2026-34450 is a file-permissions vulnerability in the anthropic Python SDK (versions 0.86.0 up to but not including 0.87.0). The local filesystem memory tool created memory files with mode 0o666, leaving persisted agent state world-readable under a standard umask and world-writable under the permissive umasks common in Docker base images. A local attacker can read agent memory or, in containerized deployments, modify it to poison future model context. The issue is patched in version 0.87.0.

Is CVE-2026-34450 actively exploited?

No confirmed active exploitation of CVE-2026-34450 has been reported, but organizations should still patch proactively.

How to fix CVE-2026-34450?

1. PATCH: Upgrade anthropic SDK to 0.87.0 immediately. 2. AUDIT: Locate exposed memory files with `find . -name '*.json' -perm /o+rw`; restrict with `chmod 600`. 3. HARDEN: Set explicit umask (0o077 or stricter) in Dockerfiles and container entrypoint scripts. 4. DETECT: Monitor memory file modification timestamps for unexpected writes outside normal agent process context; alert on anomalies. 5. ROTATE: Treat existing memory files as potentially compromised — purge and recreate agent state if memory files were accessible to other processes.

What systems are affected by CVE-2026-34450?

This vulnerability affects the following AI/ML architecture patterns: agent frameworks, containerized AI workloads, multi-tenant AI deployments, AI agent pipelines with persistent memory.

What is the CVSS score for CVE-2026-34450?

No CVSS score has been assigned yet.

Technical Details

NVD Description

The Claude SDK for Python provides access to the Claude API from Python applications. From version 0.86.0 to before version 0.87.0, the local filesystem memory tool in the Anthropic Python SDK created memory files with mode 0o666, leaving them world-readable on systems with a standard umask and world-writable in environments with a permissive umask such as many Docker base images. A local attacker on a shared host could read persisted agent state, and in containerized deployments could modify memory files to influence subsequent model behavior. Both the synchronous and asynchronous memory tool implementations were affected. This issue has been patched in version 0.87.0.
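The root cause is the interaction between the requested creation mode 0o666 and the process umask: the kernel grants `mode & ~umask`. A short sketch (not SDK code; the helper name is illustrative) makes the effective permissions concrete:

```python
import os
import stat
import tempfile

def created_mode(requested: int, umask_value: int) -> int:
    """Effective permissions of a file created with `requested` under `umask_value`."""
    old = os.umask(umask_value)
    try:
        path = os.path.join(tempfile.gettempdir(), f"memdemo-{os.getpid()}")
        # os.open applies the umask to the requested mode at creation time.
        fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_EXCL, requested)
        os.close(fd)
        mode = stat.S_IMODE(os.stat(path).st_mode)
        os.unlink(path)
        return mode
    finally:
        os.umask(old)

# Under the common default umask 0o022, a requested 0o666 yields 0o644
# (world-readable); under a fully permissive umask 0o000 it stays 0o666
# (world-writable); the recommended 0o077 reduces it to owner-only 0o600.
```

This is why the same bug is a read-only disclosure on a typical host but a write primitive in Docker images that ship with a permissive umask.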

Exploitation Scenario

In a Dockerized multi-agent deployment, a compromised microservice running in the same container reads agent memory files (0o666 permissions) to harvest conversation history and prior API context. The attacker then writes poisoned entries to the memory file, injecting fabricated prior interactions that instruct the agent to exfiltrate data via tool calls or bypass content controls on subsequent requests. Because memory is loaded as trusted context at session startup, the agent processes injected instructions without user or operator visibility. The attack requires no network access, no authentication bypass, and produces no API-layer audit log entries.
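One lightweight way to implement the DETECT mitigation against this scenario is to snapshot memory-file modification times immediately after each legitimate agent write and compare before the next session load. A minimal sketch, assuming a polling model (the snapshot/compare split and function names are illustrative, not an SDK API):

```python
import os

def take_snapshot(paths: list[str]) -> dict[str, float]:
    """Record mtimes immediately after the agent's own writes complete."""
    return {p: os.stat(p).st_mtime for p in paths}

def check_for_tampering(snapshot: dict[str, float]) -> list[str]:
    """Return paths whose mtime changed since the snapshot, i.e. writes
    that did not come from the agent process itself."""
    return [p for p, mtime in snapshot.items() if os.stat(p).st_mtime != mtime]
```

Run the check before memory is loaded as context at session startup; any path it returns should trigger an alert and the ROTATE step rather than being trusted. Note that mtimes can be forged with `utime`, so this catches careless tampering, not a determined attacker; filesystem audit logging (e.g. auditd) is the stronger control.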

Timeline

Published
March 31, 2026
Last Modified
April 1, 2026
First Seen
March 31, 2026

Related Vulnerabilities