GHSA-6w4w-5w54-rjvr: picklescan: detection bypass allows RCE via ML model files

GHSA-6w4w-5w54-rjvr MEDIUM
Published August 26, 2025
CISO Take

picklescan is widely used as the security gate before loading PyTorch and other ML model files — this bypass means models that passed your scan may still execute arbitrary code on load. Update picklescan to 0.0.29 immediately and treat all models previously cleared by older versions as untrusted. If you use ML models from external sources or shared repositories, this is a critical supply chain exposure regardless of the medium CVSS score.

Risk Assessment

CVSS score understates real-world risk. picklescan is a dedicated ML security scanner; a bypass here nullifies a primary defense layer in ML pipelines. Exploitability is moderate — the technique requires knowledge of Python internals and pickle serialization, but a working PoC is publicly available. Blast radius is high for any organization ingesting third-party PyTorch models, shared model artifacts, or pickle-serialized Python objects. Teams with model-sharing workflows (Hugging Face, internal model registries) face the highest exposure.

Affected Systems

| Package    | Ecosystem | Vulnerable Range | Patched |
|------------|-----------|------------------|---------|
| picklescan | pip       | < 0.0.29         | 0.0.29  |

Do you use picklescan? Any version earlier than 0.0.29 is affected.

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Moderate

Recommended Action

6 steps
  1. PATCH

    Update picklescan to >= 0.0.29 immediately. Verify with pip show picklescan.

  2. RE-SCAN

    Re-validate all model files previously cleared by older picklescan versions — prior clearances are invalid.

  3. DISTRUST

    Treat any model loaded before this patch as potentially compromised; audit execution logs for unexpected process spawns.

  4. MIGRATE

    Prefer safetensors format over pickle for model storage — it is structurally safe from code execution.

  5. DETECT

    Monitor for idlelib.autocomplete imports in process trees spawned by model loading scripts.

  6. GATE

    Add a secondary check (e.g., manual review of __reduce__ chains) for models from untrusted sources.
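The GATE step can be sketched as a static pre-check that lists the `(module, name)` pairs a pickle file would import and compares them against an allowlist. This is a hedged, best-effort sketch, not a vetted policy: the `ALLOWLIST` entries are illustrative assumptions, and the opcode walk approximates `STACK_GLOBAL` handling rather than fully interpreting the pickle VM. As this very advisory shows, scanners of this kind can be bypassed, so treat it as one layer, not a guarantee.

```python
import pickletools

def pickle_imports(data: bytes):
    """Best-effort list of (module, name) pairs a pickle would import.

    Handles GLOBAL (protocols 0-1) directly and approximates STACK_GLOBAL
    (protocol 2+) by pairing the two most recently pushed strings. This is
    a static sketch, not a full pickle interpreter.
    """
    strings = []  # strings seen so far (rough approximation of the stack)
    found = []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            module, name = arg.split(" ", 1)
            found.append((module, name))
        elif opcode.name in ("UNICODE", "BINUNICODE", "BINUNICODE8",
                             "SHORT_BINUNICODE"):
            strings.append(arg)
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            found.append((strings[-2], strings[-1]))
    return found

# Illustrative allowlist only -- a real policy must be built for your models.
ALLOWLIST = {
    ("collections", "OrderedDict"),
    ("torch._utils", "_rebuild_tensor_v2"),
}

def gate(data: bytes) -> list:
    """Return imports not on the allowlist; an empty list means the gate passes."""
    return [imp for imp in pickle_imports(data) if imp not in ALLOWLIST]
```

A model file whose pickle stream references anything outside the allowlist can then be quarantined for the manual `__reduce__` review the step describes.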

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art.15 - Accuracy, robustness and cybersecurity
ISO 42001
A.10.1 - AI supply chain security A.9.3 - AI system security testing
NIST AI RMF
GOVERN-6.2 - AI supply chain policies MANAGE-2.2 - Third-party AI risk management
OWASP LLM Top 10
LLM03 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-6w4w-5w54-rjvr?

GHSA-6w4w-5w54-rjvr is a detection bypass in picklescan, the scanner widely used as a security gate before loading PyTorch and other ML model files. A crafted pickle payload built on idlelib.autocomplete.AutoComplete.get_entity evades detection, so models that passed the scan may still execute arbitrary code on load. Update picklescan to 0.0.29 immediately and treat all models previously cleared by older versions as untrusted.

Is GHSA-6w4w-5w54-rjvr actively exploited?

No confirmed active exploitation of GHSA-6w4w-5w54-rjvr has been reported, but organizations should still patch proactively.

How to fix GHSA-6w4w-5w54-rjvr?

1. PATCH: Update picklescan to >= 0.0.29 immediately. Verify with `pip show picklescan`. 2. RE-SCAN: Re-validate all model files previously cleared by older picklescan versions — prior clearances are invalid. 3. DISTRUST: Treat any model loaded before this patch as potentially compromised; audit execution logs for unexpected process spawns. 4. MIGRATE: Prefer safetensors format over pickle for model storage — it is structurally safe from code execution. 5. DETECT: Monitor for `idlelib.autocomplete` imports in process trees spawned by model loading scripts. 6. GATE: Add a secondary check (e.g., manual review of `__reduce__` chains) for models from untrusted sources.

What systems are affected by GHSA-6w4w-5w54-rjvr?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps CI/CD pipelines, model registries, data science workstations.

What is the CVSS score for GHSA-6w4w-5w54-rjvr?

No CVSS score has been assigned yet.

Technical Details

NVD Description

### Summary

A crafted pickle file can call `idlelib.autocomplete.AutoComplete.get_entity`, a function from a built-in Python library, to execute code when the file is loaded — and picklescan does not flag it.

### Details

The attack executes in the following steps: first, the attacker crafts a payload that calls the `idlelib.autocomplete.AutoComplete.get_entity` function from a `__reduce__` method. The victim then checks the pickle file with the picklescan library, which does not detect any dangerous functions, and proceeds to `pickle.load()` the malicious file, leading to remote code execution.

### PoC

```
class EvilIdlelibAutocompleteGetEntity:
    def __reduce__(self):
        from idlelib.autocomplete import AutoComplete
        return AutoComplete().get_entity, ("__import__('os').system('whoami')",)
```

### Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

### Corresponding

https://github.com/FredericDT
https://github.com/Qhaoduoyu
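The bypass mechanic, a `__reduce__` that points at an innocuous-looking callable, can be illustrated with a harmless stand-in. Below, `fractions.Fraction` plays the role of the idlelib gadget (a hypothetical substitution for safe demonstration): the serialized bytes reference only the `fractions` module, so an import denylist keyed on `os`, `subprocess`, or `builtins` sees nothing suspicious, yet the named callable still runs on `pickle.load()`.

```python
import pickle
import pickletools

class LooksHarmless:
    """Benign stand-in for the advisory's gadget class (illustrative only)."""
    def __reduce__(self):
        # The real PoC returns AutoComplete().get_entity here; we substitute
        # a harmless stdlib callable that no denylist would flag.
        import fractions
        return fractions.Fraction, ("1/3",)

# Protocol 0 keeps the import visible as a plain-text GLOBAL opcode.
data = pickle.dumps(LooksHarmless(), protocol=0)
modules = {arg.split(" ")[0] for op, arg, _ in pickletools.genops(data)
           if op.name == "GLOBAL"}
print(modules)  # {'fractions'} -- nothing a denylist scanner would flag
```

With the real gadget, the equivalent of `Fraction("1/3")` is an `eval`-style call on attacker-controlled input, which is why an unremarkable-looking import list is not evidence of safety.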

Exploitation Scenario

An adversary crafts a malicious PyTorch model embedding the `idlelib.autocomplete.AutoComplete.get_entity` payload in a `__reduce__` method. The model is published to a public Hugging Face repository or injected into an internal model registry via a compromised contributor account. A victim's MLOps pipeline scans the model with picklescan < 0.0.29, receives a clean result, and proceeds to load the model in a training or inference environment. On `pickle.load()`, the payload executes — spawning a shell, exfiltrating credentials, or establishing persistence inside the ML infrastructure. Because the vector is a 'safe' model file, security teams may not correlate the process execution with the model load.
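One way to implement the DETECT recommendation inside a loader you control is a CPython runtime audit hook (PEP 578) that records imports of suspect packages such as `idlelib` while a model is being loaded. This is a minimal sketch, assuming the loading process is CPython 3.8+ and the hook is installed before `pickle.load()` runs; it complements, rather than replaces, scanning.

```python
import sys

def make_import_monitor(prefixes=("idlelib",)):
    """Install an audit hook that records imports of suspect top-level packages.

    Returns the (mutable) alert list. Audit hooks cannot be removed, so this
    is intended for a dedicated model-loading process, not a shared service.
    """
    alerts = []

    def hook(event, args):
        if event == "import":
            module = args[0]  # fully qualified name of the module being imported
            if module.partition(".")[0] in prefixes:
                alerts.append(module)

    sys.addaudithook(hook)
    return alerts
```

A loading wrapper would install the monitor, call `pickle.load()` (or `torch.load()`), and fail closed if the alert list is non-empty afterwards.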

Timeline

Published
August 26, 2025
Last Modified
August 26, 2025
First Seen
March 24, 2026

Related Vulnerabilities