GHSA-7cq8-mj8x-j263: picklescan: detection bypass allows malicious pickle RCE
Severity: Medium

picklescan is widely deployed as a trust gate before loading ML models — this bypass invalidates that control entirely. Any model scanned and cleared by picklescan < 0.0.29 should be considered untrusted. Upgrade to 0.0.29 immediately and treat the pickle format as inherently unsafe regardless of scanning results; prefer safetensors for model serialization.
Risk Assessment
Medium CVSS understates the operational risk. This is a security control bypass — it doesn't just exploit a system, it defeats the defense layer teams rely on. Organizations using picklescan as their primary model validation mechanism are fully exposed to supply chain RCE with a false sense of security. Exploitability is moderate: requires knowledge of Python internals and picklescan's allowlist logic, but a working PoC is publicly available. Blast radius is high anywhere ML models are loaded from external or shared sources.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| picklescan | pip | < 0.0.29 | 0.0.29 |
Recommended Action
1. Upgrade picklescan to >= 0.0.29 immediately.
2. Migrate model serialization to the safetensors format — it is pickle-free by design and eliminates this class of attack.
3. Never treat any pickle scanner as a security guarantee; pickle deserialization is fundamentally unsafe with untrusted inputs.
4. Audit any models scanned and loaded using picklescan < 0.0.29 — treat them as potentially compromised.
5. Implement allowlisting at the model registry level and restrict model loading to known-good SHA256 hashes.
6. Detection: monitor for unexpected process spawning (e.g., `whoami`, shell processes) from Python model-loading workers.
Frequently Asked Questions
What is GHSA-7cq8-mj8x-j263?
picklescan is widely deployed as a trust gate before loading ML models — this bypass invalidates that control entirely. Any model scanned and cleared by picklescan < 0.0.29 should be considered untrusted. Upgrade to 0.0.29 immediately and treat pickle format as inherently unsafe regardless of scanning results; prefer safetensors for model serialization.
Is GHSA-7cq8-mj8x-j263 actively exploited?
No confirmed active exploitation of GHSA-7cq8-mj8x-j263 has been reported, but organizations should still patch proactively.
How to fix GHSA-7cq8-mj8x-j263?
1. Upgrade picklescan to >= 0.0.29 immediately.
2. Migrate model serialization to the safetensors format — it is pickle-free by design and eliminates this class of attack.
3. Never treat any pickle scanner as a security guarantee; pickle deserialization is fundamentally unsafe with untrusted inputs.
4. Audit any models scanned and loaded using picklescan < 0.0.29 — treat them as potentially compromised.
5. Implement allowlisting at the model registry level and restrict model loading to known-good SHA256 hashes.
6. Detection: monitor for unexpected process spawning (e.g., `whoami`, shell processes) from Python model-loading workers.
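Where pickle loading cannot be avoided entirely, the restricted-Unpickler pattern from the Python standard library documentation adds defense in depth: override `find_class` and reject globals by default. The sketch below denies all globals, which only works for pickles containing plain data; real PyTorch checkpoints reference many classes, which is exactly why allowlist scanning is so hard to get right and why safetensors is preferable.

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Refuse every global reference: safe for pickles of plain data only."""

    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            f"global '{module}.{name}' is forbidden"
        )

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain data structures load fine...
assert safe_loads(pickle.dumps({"weights": [1.0, 2.0]})) == {"weights": [1.0, 2.0]}

# ...but any pickle that references a callable is rejected outright,
# regardless of whether a scanner would consider that callable dangerous.
evil = b"cidlelib.autocomplete\nAutoComplete\n(tR."
try:
    safe_loads(evil)
except pickle.UnpicklingError:
    pass
else:
    raise AssertionError("payload was not blocked")
```

A deny-by-default loader fails closed: new gadget discoveries like this advisory's `idlelib` bypass do not require updating any list.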
What systems are affected by GHSA-7cq8-mj8x-j263?
This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps pipelines, model registries, data science workstations.
What is the CVSS score for GHSA-7cq8-mj8x-j263?
No CVSS score has been assigned yet; the advisory itself is rated Medium severity.
Technical Details
NVD Description
### Summary
The payload abuses `idlelib.autocomplete.AutoComplete.fetch_completions`, a function from the Python standard library, to execute code when a malicious pickle file is loaded.

### Details
The attack proceeds in two steps. First, the attacker crafts a payload whose `__reduce__` method returns a call to `idlelib.autocomplete.AutoComplete.fetch_completions`. Then the victim checks the pickle file with picklescan, which detects no dangerous functions, and calls `pickle.load()` on the malicious file, leading to remote code execution.

### PoC
```python
class EvilIdlelibAutocompleteFetchCompletions:
    def __reduce__(self):
        from idlelib.autocomplete import AutoComplete, ATTRS
        return AutoComplete().fetch_completions, ("__import__('os').system('whoami')", ATTRS)
```

### Impact
Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in pickle files that remains undetected but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

### Corresponding
https://github.com/FredericDT
https://github.com/Qhaoduoyu
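Why the scan returns clean: opcode-level scanners flag a pickle by checking each imported global against lists of known-dangerous callables, without ever executing the file. The toy scanner below illustrates that approach (it is not picklescan's actual code or its real lists): a pickle that calls `os.system` directly is caught, while one routed through the unlisted stdlib callable `idlelib.autocomplete.AutoComplete` passes.

```python
import pickletools

# Toy denylist in the spirit of opcode scanners (not picklescan's real lists).
DANGEROUS = {("os", "system"), ("builtins", "eval"), ("subprocess", "Popen")}

def referenced_globals(data: bytes):
    """Yield (module, name) for each GLOBAL opcode, without executing the pickle.

    Real scanners also handle STACK_GLOBAL and memoized strings; this sketch
    covers only protocol-0 GLOBAL opcodes for clarity.
    """
    for op, arg, _pos in pickletools.genops(data):
        if op.name == "GLOBAL":
            module, name = arg.split(" ", 1)
            yield module, name

# A hand-crafted protocol-0 pickle that calls os.system is flagged...
evil = b"cos\nsystem\n(S'whoami'\ntR."
assert any(g in DANGEROUS for g in referenced_globals(evil))

# ...but one routed through an unlisted stdlib callable scans clean.
bypass = b"cidlelib.autocomplete\nAutoComplete\n(tR."
assert not any(g in DANGEROUS for g in referenced_globals(bypass))
```

The gap is structural: the standard library contains many callables that can be chained into command execution, so any finite list of "dangerous" names can be bypassed by the next unlisted gadget.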
Exploitation Scenario
Adversary uploads a malicious PyTorch model to a public registry (HuggingFace, GitHub, internal artifact store). The model's pickle file encodes a `__reduce__` method that calls `idlelib.autocomplete.AutoComplete().fetch_completions` with an OS command as argument — a Python stdlib function that picklescan does not flag. The target organization's MLOps pipeline runs picklescan as a pre-load validation step; the scan returns clean. The model is loaded into a training or inference worker, triggering RCE. The attacker gains code execution in the ML infrastructure — typically a privileged environment with GPU access, cloud credentials, and access to training data.
Related Vulnerabilities
All in the same package (picklescan):

- GHSA-vvpj-8cmc-gx39 (10.0): security flaw enables exploitation
- GHSA-g38g-8gr9-h9xp (9.8): Allowlist Bypass evades input filtering
- GHSA-7wx9-6375-f5wh (9.8): Allowlist Bypass evades input filtering
- CVE-2025-1945 (9.8): ZIP flag bypass enables RCE in PyTorch models
- GHSA-hgrh-qx5j-jfwx (8.8): Protection Bypass circumvents security controls