GHSA-j343-8v2j-ff7w: picklescan: scanner bypass allows pickle-based RCE

GHSA-j343-8v2j-ff7w MEDIUM
Published August 26, 2025
CISO Take

picklescan below 0.0.30 fails to detect malicious pickle payloads crafted via Python's built-in idlelib module, silently passing files that execute arbitrary code on load. Any ML pipeline using picklescan as a safety gate for PyTorch models is directly exposed — this nullifies your primary defense. Upgrade to 0.0.30 immediately and re-scan all previously cleared artifacts.

Risk Assessment

Practical risk is higher than the medium CVSS suggests. The vulnerability subverts a dedicated security control rather than exploiting a generic application weakness, creating a dangerous false sense of safety. A working PoC is publicly available, lowering the exploitation bar to anyone with basic Python knowledge. Organizations in regulated sectors using picklescan to meet compliance requirements for model provenance checks are particularly exposed, as passing scans will be logged as evidence of due diligence that does not actually exist.

Affected Systems

Package     Ecosystem   Vulnerable Range   Patched
picklescan  pip         < 0.0.30           0.0.30

Do you use picklescan below 0.0.30? You're affected.

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Moderate

Recommended Action

5 steps
  1. Patch: upgrade picklescan to >= 0.0.30 across all environments and CI/CD agents.

  2. Re-scan: re-run updated picklescan against all model artifacts previously cleared by older versions — do not trust prior scan results.

  3. Sandbox model loading: load untrusted pickle files in isolated containers with no network egress and minimal filesystem access (seccomp profiles, read-only mounts).

  4. Migrate to SafeTensors: evaluate replacing pickle-based serialization with SafeTensors for all model storage and distribution — architecturally eliminates the class of attack.

  5. Detection: instrument Python processes that call pickle.load() and alert on unexpected child process spawning (os.system, subprocess) from model-loading code paths.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
  Art. 15 - Accuracy, robustness and cybersecurity
  Art. 9 - Risk management system
ISO 42001
  A.6.2 - AI system risk management
  A.8.1 - AI system security
NIST AI RMF
  MANAGE 2.2 - Risk treatment
  MEASURE 2.5 - AI system testing and evaluation
OWASP LLM Top 10
  LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-j343-8v2j-ff7w?

picklescan below 0.0.30 fails to detect malicious pickle payloads crafted via Python's built-in idlelib module, silently passing files that execute arbitrary code on load. Any ML pipeline using picklescan as a safety gate for PyTorch models is directly exposed — this nullifies your primary defense. Upgrade to 0.0.30 immediately and re-scan all previously cleared artifacts.

Is GHSA-j343-8v2j-ff7w actively exploited?

No confirmed active exploitation of GHSA-j343-8v2j-ff7w has been reported, but organizations should still patch proactively.

How to fix GHSA-j343-8v2j-ff7w?

1. Patch: upgrade picklescan to >= 0.0.30 across all environments and CI/CD agents.
2. Re-scan: re-run updated picklescan against all model artifacts previously cleared by older versions — do not trust prior scan results.
3. Sandbox model loading: load untrusted pickle files in isolated containers with no network egress and minimal filesystem access (seccomp profiles, read-only mounts).
4. Migrate to SafeTensors: evaluate replacing pickle-based serialization with SafeTensors for all model storage and distribution — architecturally eliminates the class of attack.
5. Detection: instrument Python processes that call pickle.load() and alert on unexpected child process spawning (os.system, subprocess) from model-loading code paths.

What systems are affected by GHSA-j343-8v2j-ff7w?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, model registries, MLOps CI/CD pipelines.

What is the CVSS score for GHSA-j343-8v2j-ff7w?

No CVSS score has been assigned yet.

Technical Details

NVD Description

### Summary

The attack abuses idlelib.pyshell.ModifiedInterpreter.runcommand, a function from Python's built-in idlelib library, to execute code carried in a remote pickle file.

### Details

The attack payload executes in two steps. First, the attacker crafts a payload that calls the idlelib.pyshell.ModifiedInterpreter.runcommand function from a __reduce__ method. Then the victim checks the pickle file with the picklescan library, which does not detect any dangerous functions, and proceeds to pickle.load() the malicious file — leading to remote code execution.

### PoC

```
from idlelib.pyshell import ModifiedInterpreter
from types import SimpleNamespace

class EvilIdlelibPyshellModifiedInterpreterRuncommand:
    def __reduce__(self):
        payload = "__import__('os').system('whoami')"
        fake_self = SimpleNamespace(
            locals={},
            tkconsole=SimpleNamespace(executing=False),
            rpcclt=None,
            debugger=None,
        )
        return ModifiedInterpreter.runcommand, (fake_self, payload)
```

### Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

### Credits

https://github.com/FredericDT
https://github.com/Qhaoduoyu

Exploitation Scenario

Attacker uploads a PyTorch model to a public or internal model registry. The model's pickle payload uses idlelib.pyshell.ModifiedInterpreter.runcommand inside a __reduce__ method — a function not on picklescan's blocklist. The victim's CI/CD pipeline pulls the model, runs picklescan (returns clean), and promotes the artifact to the model store. A training job or inference server calls torch.load() / pickle.load(), triggering execution of an embedded reverse shell or credential harvesting command. The attacker gains persistent access to ML infrastructure with no anomalous scan results to trigger alerts.

Timeline

Published
August 26, 2025
Last Modified
August 26, 2025
First Seen
March 24, 2026

Related Vulnerabilities