Upgrade the anthropic Python SDK to 0.87.0 immediately if you use the filesystem memory tool. Docker deployments face the highest risk — permissive default umasks make memory files world-writable, allowing any co-resident process to tamper with agent state and silently poison future model context. As an interim control, set explicit umask restrictions in your Dockerfiles and audit existing memory file permissions.
What is the risk?
Medium severity with elevated risk in containerized environments. EPSS is near-zero (0.00012) and exploitation requires local access, limiting remote attack surface. However, Docker base images commonly ship with permissive umasks, making the write primitive trivially available to any co-resident service or compromised process. The ability to modify agent memory — effectively injecting false context into future model interactions — elevates impact well beyond simple data disclosure.
What systems are affected?
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| anthropic | pip | >= 0.86.0, < 0.87.0 | 0.87.0 |
Do you use anthropic 0.86.0 (or any 0.86.x release before 0.87.0)? You're affected.
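A quick way to check whether an installed SDK falls in the vulnerable range, using only the standard library. This is a sketch: it assumes plain `X.Y.Z` version strings and would need a real parser (e.g. `packaging.version`) for pre-release suffixes.

```python
import importlib.metadata

def is_affected(version: str) -> bool:
    """True if version is in the vulnerable range >= 0.86.0, < 0.87.0.

    Assumes a plain X.Y.Z version string; pre-release suffixes such as
    "0.87.0rc1" would need a real version parser.
    """
    parts = tuple(int(x) for x in version.split(".")[:3])
    return (0, 86, 0) <= parts < (0, 87, 0)

try:
    installed = importlib.metadata.version("anthropic")
    status = "affected" if is_affected(installed) else "patched or outside range"
    print(f"anthropic {installed}: {status}")
except importlib.metadata.PackageNotFoundError:
    print("anthropic is not installed")
```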
What should I do?
5 steps:
1. PATCH: Upgrade the anthropic SDK to 0.87.0 immediately.
2. AUDIT: Locate exposed memory files with `find . -name '*.json' -perm /o+rw`; restrict with `chmod 600`.
3. HARDEN: Set an explicit umask (`0o077` or stricter) in Dockerfiles and container entrypoint scripts.
4. DETECT: Monitor memory file modification timestamps for unexpected writes outside the normal agent process context; alert on anomalies.
5. ROTATE: Treat existing memory files as potentially compromised. Purge and recreate agent state if memory files were accessible to other processes.
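The AUDIT and HARDEN steps above can be sketched as shell commands. The paths and the `*.json` glob are assumptions; adjust them to wherever your agent actually persists memory.

```shell
# AUDIT: list memory files readable or writable by "other" users.
# Assumes JSON memory files under the current directory; adjust the
# path and glob to your deployment.
find . -name '*.json' -perm /o+rw -print

# Restrict any exposed files to owner read/write only.
find . -name '*.json' -perm /o+rw -exec chmod 600 {} +

# HARDEN: in a container entrypoint script, set a restrictive umask so
# files created afterwards are never group- or world-accessible.
umask 077
```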
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Related AI Incidents (17)
All 17 matches are based on the package "anthropic" being mentioned in the incident and/or the company "Anthropic" appearing in the CVE description.
Source: AI Incident Database (AIID)
Frequently Asked Questions
What is CVE-2026-34450?
CVE-2026-34450 is a file permission vulnerability in the Anthropic Python SDK, affecting versions 0.86.0 up to but not including 0.87.0. The local filesystem memory tool created memory files with mode 0o666, leaving persisted agent state world-readable under a standard umask and world-writable under the permissive umasks common in Docker base images. A local attacker could read agent memory or modify it to influence subsequent model behavior. The issue is patched in version 0.87.0.
Is CVE-2026-34450 actively exploited?
No confirmed active exploitation of CVE-2026-34450 has been reported, but organizations should still patch proactively.
How to fix CVE-2026-34450?
1. PATCH: Upgrade anthropic SDK to 0.87.0 immediately. 2. AUDIT: Locate exposed memory files with `find . -name '*.json' -perm /o+rw`; restrict with `chmod 600`. 3. HARDEN: Set explicit umask (0o077 or stricter) in Dockerfiles and container entrypoint scripts. 4. DETECT: Monitor memory file modification timestamps for unexpected writes outside normal agent process context; alert on anomalies. 5. ROTATE: Treat existing memory files as potentially compromised — purge and recreate agent state if memory files were accessible to other processes.
What systems are affected by CVE-2026-34450?
This vulnerability affects the following AI/ML architecture patterns: agent frameworks, containerized AI workloads, multi-tenant AI deployments, AI agent pipelines with persistent memory.
What is the CVSS score for CVE-2026-34450?
No CVSS score has been assigned yet.
Technical Details
NVD Description
The Claude SDK for Python provides access to the Claude API from Python applications. From version 0.86.0 to before version 0.87.0, the local filesystem memory tool in the Anthropic Python SDK created memory files with mode 0o666, leaving them world-readable on systems with a standard umask and world-writable in environments with a permissive umask such as many Docker base images. A local attacker on a shared host could read persisted agent state, and in containerized deployments could modify memory files to influence subsequent model behavior. Both the synchronous and asynchronous memory tool implementations were affected. This issue has been patched in version 0.87.0.
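The fix pattern the description implies can be sketched in plain Python: request mode 0o600 at creation time via `os.open`, so the kernel never grants group/other bits even under a permissive umask. This is a generic sketch of the technique, not the SDK's actual patch.

```python
import os

def write_memory_file(path: str, data: str) -> None:
    """Create or truncate a memory file with owner-only permissions.

    os.open applies the requested mode (0o600) at creation time, and
    the umask can only clear permission bits, never add them, so the
    file can never come out group- or world-accessible, even in a
    container running with umask 0o000.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(data)
```

Contrast this with a bare `open(path, "w")`, which creates files at 0o666 masked only by the process umask, reproducing the vulnerable behavior in permissive environments.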
Exploitation Scenario
In a Dockerized multi-agent deployment, a compromised microservice running in the same container reads agent memory files (0o666 permissions) to harvest conversation history and prior API context. The attacker then writes poisoned entries to the memory file, injecting fabricated prior interactions that instruct the agent to exfiltrate data via tool calls or bypass content controls on subsequent requests. Because memory is loaded as trusted context at session startup, the agent processes injected instructions without user or operator visibility. The attack requires no network access, no authentication bypass, and produces no API-layer audit log entries.
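The tampering described above can be caught with a baseline-and-recheck approach: record each memory file's modification time and content hash, then compare outside the agent's own write path. A minimal sketch; file paths, scheduling, and alerting are left to the deployment.

```python
import hashlib
import os

def snapshot(paths):
    """Record (mtime_ns, sha256 hex digest) for each memory file."""
    state = {}
    for p in paths:
        st = os.stat(p)
        with open(p, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        state[p] = (st.st_mtime_ns, digest)
    return state

def detect_tampering(baseline, paths):
    """Return paths whose mtime or content differs from the baseline."""
    current = snapshot(paths)
    return [p for p in paths if baseline.get(p) != current[p]]
```

Hashing content as well as checking mtimes matters here: an attacker who rewrites a memory file and then restores its timestamp would evade an mtime-only check.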
References
- github.com/advisories/GHSA-q5f5-3gjm-7mfm
- nvd.nist.gov/vuln/detail/CVE-2026-34450
- github.com/anthropics/anthropic-sdk-python/commit/715030ceb4d6dd8d3546e999c680e29532bf1255
- github.com/anthropics/anthropic-sdk-python/releases/tag/v0.87.0
- github.com/anthropics/anthropic-sdk-python/security/advisories/GHSA-q5f5-3gjm-7mfm
Related Vulnerabilities
- CVE-2026-45370 (7.7): utcp-cli: env leak exfiltrates all agent process secrets (same attack type: Data Leakage)
- CVE-2026-21852 (7.5): claude_code: Weak Credentials allow account compromise (same package: anthropic)
- CVE-2026-42074: openclaude: sandbox bypass allows host-level RCE (same package: anthropic)
- CVE-2026-34452: Anthropic SDK: TOCTOU symlink escape in async memory tool (same package: anthropic)
- CVE-2025-5120 (10.0): smolagents: sandbox escape enables unauthenticated RCE (same package: anthropic)