GHSA-p9w7-82w4-7q8m: picklescan: detection bypass allows pickle RCE in ML pipelines

GHSA-p9w7-82w4-7q8m MEDIUM
Published August 26, 2025
CISO Take

picklescan is widely used as the security gate for PyTorch model files — this bypass renders that gate useless. Any organization scanning pickle files with picklescan < 0.0.30 before loading them should treat all previously scanned models as unverified and update immediately. This is a supply chain attack vector: malicious models hosted on public hubs pass the scan and execute arbitrary code silently on load.

Risk Assessment

Practical risk is HIGH despite the MEDIUM severity label (no CVSS score has been assigned). The exploit is trivial to reproduce (the PoC is public), the affected tool (picklescan) is the de facto standard for ML model security scanning, and the impact is full RCE on the loading system. The false-negative failure mode is particularly dangerous: teams believe they are protected when they are not. Exposure is broad; any ML team using picklescan in CI/CD, Jupyter workflows, or model-serving pipelines is silently vulnerable.

Affected Systems

Package      Ecosystem   Vulnerable Range   Patched
picklescan   pip         < 0.0.30           0.0.30


Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Trivial

Recommended Action

5 steps
  1. PATCH

    Upgrade picklescan to >= 0.0.30 immediately. This is the only fix.

  2. AUDIT

    Re-scan any pickle files previously cleared by picklescan < 0.0.30, especially those from external sources.

  3. DEFENSE-IN-DEPTH

    Do not rely on picklescan alone: add sandboxed model loading (a RestrictedUnpickler, or load in an isolated container/VM) and restrict pickle loading to signed artifacts from trusted sources only.

  4. DETECT

    Monitor for unexpected process spawning from Python processes loading model files (e.g., child processes from pickle.load calls).

  5. BLOCK

    If you cannot patch immediately, block loading of pickle files from untrusted sources at the policy level.
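The sandboxed loading in step 3 can be sketched as an allowlisting unpickler, following the `find_class` restriction pattern from the Python `pickle` documentation. This is a minimal sketch; `SAFE_GLOBALS` is a placeholder allowlist you would tailor to the classes your models actually need:

```python
import io
import pickle
from collections import OrderedDict

# Placeholder allowlist -- tailor to the classes your models actually need.
SAFE_GLOBALS = {("collections", "OrderedDict")}

class RestrictedUnpickler(pickle.Unpickler):
    """Fail closed: permit only explicitly listed globals, instead of
    denylisting known-bad gadgets the way a scanner does."""
    def find_class(self, module, name):
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"blocked global during unpickling: {module}.{name}")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# An allowed payload loads normally...
print(restricted_loads(pickle.dumps(OrderedDict(a=1))))

# ...while any pickle referencing an unlisted global (here os.system,
# standing in for the make_label gadget) is rejected, not executed.
import os
try:
    restricted_loads(pickle.dumps(os.system))
except pickle.UnpicklingError as exc:
    print("blocked:", exc)
```

The design point: an allowlist fails closed on novel gadgets such as make_label, whereas a denylist scanner fails open, which is exactly the failure this advisory describes.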

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art.9 - Risk Management System
ISO 42001
A.6.2.6 - AI system supply chain security
NIST AI RMF
GOVERN-6.1 - Policies and procedures are in place for third-party AI risks
MANAGE-2.2 - Mechanisms for monitoring AI risks are implemented
OWASP LLM Top 10
LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-p9w7-82w4-7q8m?

GHSA-p9w7-82w4-7q8m is a detection bypass in picklescan versions before 0.0.30: a pickle payload built on the standard-library gadget lib2to3.pgen2.pgen.ParserGenerator.make_label passes the scan as clean but executes arbitrary code when the file is loaded. Because picklescan is widely used as the security gate for PyTorch model files, any organization that scanned pickle files with a vulnerable version should treat previously scanned models as unverified and update immediately.

Is GHSA-p9w7-82w4-7q8m actively exploited?

No confirmed active exploitation of GHSA-p9w7-82w4-7q8m has been reported, but organizations should still patch proactively.

How to fix GHSA-p9w7-82w4-7q8m?

1. PATCH: Upgrade picklescan to >= 0.0.30 immediately. This is the only fix.
2. AUDIT: Re-scan any pickle files previously cleared by picklescan < 0.0.30, especially those from external sources.
3. DEFENSE-IN-DEPTH: Do not rely on picklescan alone; add sandboxed model loading (a RestrictedUnpickler, or load in an isolated container/VM) and restrict pickle loading to signed artifacts from trusted sources only.
4. DETECT: Monitor for unexpected process spawning from Python processes loading model files (e.g., child processes from pickle.load calls).
5. BLOCK: If you cannot patch immediately, block loading of pickle files from untrusted sources at the policy level.

What systems are affected by GHSA-p9w7-82w4-7q8m?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, ML model registries, data science notebooks, CI/CD ML pipelines.

What is the CVSS score for GHSA-p9w7-82w4-7q8m?

No CVSS score has been assigned yet.

Technical Details

NVD Description

### Summary

A malicious pickle file can invoke lib2to3.pgen2.pgen.ParserGenerator.make_label, a built-in Python standard-library function, to execute arbitrary code on load; picklescan does not flag it.

### Details

The attack executes in two steps. First, the attacker crafts a payload whose __reduce__ method returns a call to lib2to3.pgen2.pgen.ParserGenerator.make_label. Then the victim checks the pickle file with picklescan, which detects no dangerous functions, and proceeds to pickle.load() the malicious file, leading to remote code execution.

### PoC

```
from types import SimpleNamespace
from lib2to3.pgen2.pgen import ParserGenerator

class EvilLib2to3Pgen2ParserGeneratorMakeLabel:
    def __reduce__(self):
        c = SimpleNamespace(labels=[], keywords={}, tokens={})
        label = '""+__import__(\'os\').system(\'whoami\')'
        return ParserGenerator.make_label, (None, c, label)
```

### Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

### Corresponding

https://github.com/FredericDT
https://github.com/Qhaoduoyu

Exploitation Scenario

An adversary targets an ML team that uses picklescan to validate PyTorch models before loading. The attacker crafts a malicious .pkl file using the lib2to3.pgen2.pgen.ParserGenerator.make_label gadget — a built-in Python stdlib function that picklescan does not flag. The payload is uploaded to a public model repository (e.g., HuggingFace) or embedded in a poisoned model shared via a community forum. The victim's automated pipeline scans the file with picklescan (result: clean), proceeds to torch.load() or pickle.load(), and the payload executes — running arbitrary OS commands (e.g., establishing a reverse shell, exfiltrating API keys, or pivoting to the training cluster).
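The scan-then-load failure hinges on the fact that unpickling is code execution: the `__reduce__` protocol lets a pickle invoke an arbitrary callable at load time. A benign stand-in (`str.upper` in place of the make_label-to-`os.system` chain) demonstrates the mechanism:

```python
import pickle

class Gadget:
    """Benign stand-in for a malicious payload: __reduce__ tells
    pickle to call an arbitrary callable during load."""
    def __reduce__(self):
        return (str.upper, ("this ran during pickle.load",))

payload = pickle.dumps(Gadget())
result = pickle.loads(payload)  # str.upper is invoked here, at load time
print(result)                   # the call's return value, not a Gadget
```

In the real attack the callable chain ultimately reaches `os.system`, so "loading a model" silently becomes "running the attacker's shell command".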

Timeline

Published
August 26, 2025
Last Modified
August 26, 2025
First Seen
March 24, 2026

Related Vulnerabilities