Organizations using picklescan to validate PyTorch models or any pickle-based ML artifacts have a false sense of security — malicious models crafted with operator.methodcaller bypass detection entirely and execute arbitrary code on load. Patch to picklescan 0.0.33+ immediately. Any third-party or community model ingested through a picklescan-validated pipeline before this patch should be treated as unvalidated and potentially compromised.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| picklescan | pip | < 0.0.33 | 0.0.33 |
If you use any picklescan version below 0.0.33, you are affected.
Severity & Risk
Recommended Action
1. **PATCH:** Upgrade picklescan to >= 0.0.33 immediately; this is the only definitive fix.
2. **AUDIT:** Re-scan all externally sourced models previously validated by older picklescan versions; treat prior clean results as untrusted.
3. **DEFENSE IN DEPTH:** Prefer the SafeTensors format over pickle for model storage; it eliminates this attack surface entirely. Where pickle is unavoidable, load models in isolated, sandboxed processes with network egress controls.
4. **DETECT:** Search CI/CD and pipeline logs for picklescan runs on externally sourced models prior to 0.0.33 and flag those ingestions for review.
5. **POLICY:** Enforce a SafeTensors-first policy for model intake; document picklescan as a supplementary control, not a standalone gate.
Classification
Compliance Impact
This CVE is relevant to:
Technical Details
NVD Description
### Summary

Picklescan fails to flag `operator.methodcaller`, a built-in Python function that attackers can abuse inside a pickle file to execute arbitrary commands.

### Details

The attack payload executes in the following steps:

- First, the attacker crafts the payload by invoking `operator.methodcaller` through the pickle `__reduce__` mechanism.
- Then, the victim checks the pickle file with the picklescan library, which reports no dangerous functions, so they call `pickle.load()` on the malicious file — leading to remote code execution.

### PoC

```
import pickle
import pickletools

opcode1 = b'''cbuiltins
__import__
(Vos
tRp0
0coperator
methodcaller
(Vsystem
Vecho "pwned by operator.methodcaller"
tR(g0
tR.'''

pickletools.dis(opcode1)
pickle.loads(opcode1)
```

This PoC cannot be produced by `pickle.dumps`, so it was built by hand from raw opcodes.

### Impact

Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models is affected. Attackers can embed malicious code in a pickle file that remains undetected but executes when the file is loaded, and can distribute infected pickle files through ML models, APIs, or saved Python objects.

### Report by

Pinji Chen (cpj24@mails.tsinghua.edu.cn) from the NISL lab (https://netsec.ccert.edu.cn/about) at Tsinghua University, and Guanheng Liu (coolwind326@gmail.com).
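The bypass works because deny-list scanners only flag globals they know about. A static scan over the opcode stream with the standard library's `pickletools.genops` — which decodes opcodes without executing anything — illustrates how such detection operates. This is a simplified sketch (an assumption about scanner design, not picklescan's actual implementation), and the `SUSPICIOUS` set is an illustrative deny list:

```python
import pickletools

# Illustrative deny list -- real scanners maintain a much larger one,
# and this CVE shows why any deny list is inherently incomplete.
SUSPICIOUS = {
    ("builtins", "__import__"),
    ("builtins", "eval"),
    ("os", "system"),
    ("operator", "methodcaller"),
}

def find_suspicious_globals(data: bytes):
    """Statically walk the opcode stream (nothing is executed) and
    report every global reference that matches the deny list."""
    hits = []
    recent = []  # string constants seen so far, for STACK_GLOBAL (proto 4+)
    for op, arg, pos in pickletools.genops(data):
        if op.name == "GLOBAL":  # protocols 0-3: arg is "module name"
            module, _, name = arg.partition(" ")
            if (module, name) in SUSPICIOUS:
                hits.append((module, name, pos))
        elif op.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            recent.append(arg)
        elif op.name == "STACK_GLOBAL" and len(recent) >= 2:
            # protocol 4+: module and name are pushed as strings
            module, name = recent[-2], recent[-1]
            if (module, name) in SUSPICIOUS:
                hits.append((module, name, pos))
    return hits
```

Run against the PoC bytes above, this flags both `builtins.__import__` and `operator.methodcaller`; a scanner whose deny list predates this CVE would miss the latter entirely, which is the whole vulnerability.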
Exploitation Scenario
An adversary publishes a malicious PyTorch model to HuggingFace or injects it via a compromised model repository. The victim's MLOps pipeline runs picklescan on the file before ingestion — picklescan reports no dangerous opcodes because the payload uses operator.methodcaller instead of flagged builtins like os.system. The victim calls torch.load() or pickle.load(), triggering arbitrary OS command execution (reverse shell, credential harvesting, lateral movement into training infrastructure). The attacker gains persistent access to the ML environment with the privileges of the workload — often a GPU node with broad data access.