Picklescan, the de facto ML model security scanner for PyTorch, can be bypassed with a crafted pickle payload that leverages _operator.methodcaller. Any model scan that returned 'clean' is untrustworthy until you upgrade to v0.0.34. If your MLOps pipeline uses picklescan as a security gate before loading external models, assume that gate is currently defeated. Patch immediately and treat any externally sourced model loaded in the past 90 days as suspect.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| picklescan | pip | < 0.0.34 | 0.0.34 |
If you run any version of picklescan earlier than 0.0.34, you are affected.
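One quick way to check whether an environment falls in the vulnerable range is to compare the installed picklescan version against the patched release. A minimal sketch using only the standard library (the helper names here are illustrative, not part of any picklescan API):

```python
from importlib.metadata import PackageNotFoundError, version
from typing import Optional

PATCHED = (0, 0, 34)  # first release with the methodcaller fix

def parse(ver: str) -> tuple:
    # Compare numerically: a plain string comparison would wrongly
    # rank "0.0.9" above "0.0.34".
    return tuple(int(part) for part in ver.split(".")[:3])

def picklescan_is_vulnerable() -> Optional[bool]:
    """True if an installed picklescan predates the patched release,
    None if picklescan is not installed in this environment."""
    try:
        return parse(version("picklescan")) < PATCHED
    except PackageNotFoundError:
        return None
```

Run this on every host in the pipeline, not just your workstation; a patched requirements file does not prove the patched wheel is what a given container actually imports.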
Severity & Risk
Recommended Action
1. **Patch:** Upgrade picklescan to >= 0.0.34 immediately; this is a one-line change in requirements.txt or Pipfile.
2. **Verify:** Audit which pipelines use picklescan and confirm the patched version is actually deployed, not a cached older version in a Docker layer.
3. **Defense in depth:** Migrate model serialization to the SafeTensors format (Hugging Face), which does not use pickle and eliminates this attack-surface class entirely.
4. **Sandbox:** Run all pickle.load() calls inside isolated containers or VMs with no network egress and minimal filesystem access; treat model loading as untrusted code execution.
5. **Detect:** Add egress monitoring on training/inference hosts for unexpected outbound connections during model loading.
6. **Policy:** Enforce model provenance controls: only load models from cryptographically signed, internally approved sources, and block direct downloads from public hubs into production.
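The sandbox step can be complemented in-process with the standard library: `pickle.Unpickler` lets you override `find_class` to enforce an allowlist of globals, the "Restricting Globals" pattern from the Python docs. A minimal sketch, where the allowlist contents and helper names are illustrative (real PyTorch checkpoints need more entries, e.g. torch storage types):

```python
import io
import pickle

# Hypothetical allowlist: only these (module, name) pairs may be
# resolved during unpickling. Anything else is rejected outright.
_ALLOWED = {
    ("collections", "OrderedDict"),
}

class AllowlistUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in _ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"blocked global during unpickling: {module}.{name}")

def safe_loads(data: bytes):
    """Unpickle data, refusing any global not on the allowlist."""
    return AllowlistUnpickler(io.BytesIO(data)).load()
```

Because `find_class` fires as soon as the opcode stream names a global, an allowlist rejects the methodcaller payload at its first opcode (`builtins.__import__`), before anything executes. Note that this is a mitigation for code you control, not a replacement for patching the scanner.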
Classification
Compliance Impact
This CVE is relevant to:
Technical Details
NVD Description
### Summary
Picklescan fails to flag pickle payloads that invoke `_operator.methodcaller`, a built-in Python library function, so a crafted pickle file can execute code without being detected.

### Details
The attack payload executes in the following steps:
- First, the attacker crafts the payload by calling the `_operator.methodcaller` function in the `__reduce__` method.
- Then, when the victim checks whether the pickle file is safe using the picklescan library, the library does not detect any dangerous functions; `pickle.load()` then loads the malicious pickle file, leading to remote code execution.

### PoC
```python
import pickle
import pickletools

opcode2 = b'''cbuiltins
__import__
(Vos
tRp0
0c_operator
methodcaller
(Vsystem
Vecho "pwned by _operator.methodcaller"
tR(g0
tR.'''
pickletools.dis(opcode2)
pickle.loads(opcode2)
```
This PoC cannot easily be produced by pickle.dumps, so it was built by hand.

### Impact
Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models is affected. Attackers can embed malicious code in a pickle file that remains undetected but executes when the file is loaded, and can distribute infected pickle files through ML models, APIs, or saved Python objects.

### Report
Reported by Pinji Chen (cpj24@mails.tsinghua.edu.cn) from the NISL lab (https://netsec.ccert.edu.cn/about) at Tsinghua University, and Guanheng Liu (coolwind326@gmail.com).
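The bypass works because a name blocklist never sees `os.system` referenced directly in the opcode stream: the only globals the payload names are `builtins.__import__` and `_operator.methodcaller`. This can be inspected safely with `pickletools.genops`, which lexes opcodes without executing anything. A minimal sketch using a harmless stand-in payload modeled on the PoC (the shell command is replaced with an echo; do not pass it to `pickle.loads`):

```python
import pickletools

# Stand-in payload modeled on the PoC above; builds
# operator.methodcaller("system", "echo pwned")(os) when loaded.
PAYLOAD = (
    b"cbuiltins\n__import__\n"      # GLOBAL builtins.__import__
    b"(Vos\ntRp0\n0"                # __import__("os"), memoized, popped
    b"c_operator\nmethodcaller\n"   # GLOBAL _operator.methodcaller
    b"(Vsystem\nVecho pwned\ntR"    # methodcaller("system", "echo pwned")
    b"(g0\ntR."                     # call it on the os module; STOP
)

def referenced_globals(data: bytes):
    """List module/name pairs named by GLOBAL opcodes, without executing."""
    return [arg for opcode, arg, pos in pickletools.genops(data)
            if opcode.name == "GLOBAL"]

print(referenced_globals(PAYLOAD))
# ['builtins __import__', '_operator methodcaller']
```

Neither entry mentions `os.system`, so a scanner that only blocklists known-dangerous import names never fires; the dangerous call is assembled at load time from two individually innocuous-looking globals.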
Exploitation Scenario
An adversary targeting an ML team at a financial institution crafts a malicious PyTorch model whose pickle bytecode embeds the _operator.methodcaller payload (for example, a reverse shell or credential harvester). They publish it to a public model hub under a convincing name, such as a fine-tuned LLM for finance. The target team's MLOps pipeline automatically pulls the model, runs picklescan (which returns clean), and loads it into their inference server. On pickle.load(), the payload executes, establishing a reverse shell to the attacker's C2 from the inference server, which holds IAM roles, database credentials, and internal network access. The entire attack chain is invisible to the security scanner the team trusted.