GHSA-6vqj-c2q5-j97w: picklescan: scanner bypass enables RCE via ML models

GHSA-6vqj-c2q5-j97w MEDIUM
Published August 26, 2025
CISO Take

picklescan is widely trusted as the safety gate before loading PyTorch and other ML models — this bypass completely invalidates that control. Any pipeline that loads pickle files after a picklescan clean bill of health is vulnerable to RCE. Upgrade picklescan to 0.0.29 immediately and treat all previously scanned pickle files as untrusted until re-scanned with the patched version.

Risk Assessment

High operational risk despite medium CVSS. The vulnerability is particularly dangerous because it subverts a security control rather than exploiting a direct weakness — organizations that trust picklescan have a false sense of security. Exploitability is moderate (requires knowledge of Python internals), but the PoC is trivial to reproduce, lowering the barrier for threat actors. ML model sharing workflows, model registries (internal and public), and MLOps pipelines that automate model loading are the primary exposure surface.

Affected Systems

Package     Ecosystem   Vulnerable Range   Patched
picklescan  pip         < 0.0.29           0.0.29


Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Moderate

Recommended Action

6 steps
  1. Upgrade picklescan to >= 0.0.29 immediately across all environments.

  2. Re-scan any pickle files previously cleared by older versions — the clean scan is no longer trustworthy.

  3. Migrate PyTorch models to safetensors format where possible (eliminates pickle deserialization risk entirely).

  4. Implement defense-in-depth: never rely solely on picklescan — run models in isolated containers or sandboxes before loading in production.

  5. Audit model ingestion pipelines for any automated load-after-scan patterns.

  6. For detection: monitor for unexpected process spawns or network connections originating from Python processes that load model files.
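The defense-in-depth called for in step 4 can be sketched with the allowlisting-`Unpickler` pattern from the "Restricting Globals" section of the Python `pickle` documentation: instead of scanning for known-bad globals (the denylist approach this bypass defeats), refuse to resolve any global that is not explicitly approved. The `AllowlistUnpickler` name and the allowlist contents below are illustrative, not part of picklescan:

```python
import io
import pickle
from collections import OrderedDict

# Illustrative allowlist -- extend it to the types your models
# actually need (e.g. torch._utils helpers for PyTorch checkpoints).
_ALLOWED = {
    ("collections", "OrderedDict"),
}

class AllowlistUnpickler(pickle.Unpickler):
    """Resolve only pre-approved globals; reject everything else."""
    def find_class(self, module, name):
        if (module, name) in _ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def restricted_loads(data: bytes):
    return AllowlistUnpickler(io.BytesIO(data)).load()

# An allowlisted type round-trips normally...
ok = restricted_loads(pickle.dumps(OrderedDict(a=1)))
print(ok["a"])

# ...while any other global (profile.Profile.runctx included) is
# refused before a constructor or payload can run.
try:
    restricted_loads(pickle.dumps(object()))
except pickle.UnpicklingError as exc:
    print(exc)
```

Unlike a denylist, this fails closed: a novel gadget such as `Profile.runctx` is blocked by default rather than slipping through until someone adds it to a blocklist.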

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art.9 / Art.17 - Risk management system / Quality management — supply chain
ISO 42001
A.6.1.4 - AI supply chain management
NIST AI RMF
MANAGE 2.2 - Risk treatments including response and recovery
OWASP LLM Top 10
LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-6vqj-c2q5-j97w?

GHSA-6vqj-c2q5-j97w is a scanner-bypass vulnerability in picklescan versions before 0.0.29. A malicious pickle file can route code execution through profile.Profile.runctx, which older picklescan versions do not flag, so the file scans as clean yet executes arbitrary code when loaded with pickle.load() or torch.load(). Upgrade to 0.0.29 and re-scan any pickle files previously cleared by older versions.

Is GHSA-6vqj-c2q5-j97w actively exploited?

No confirmed active exploitation of GHSA-6vqj-c2q5-j97w has been reported, but organizations should still patch proactively.

How to fix GHSA-6vqj-c2q5-j97w?

  1. Upgrade picklescan to >= 0.0.29 immediately across all environments.

  2. Re-scan any pickle files previously cleared by older versions — the clean scan is no longer trustworthy.

  3. Migrate PyTorch models to safetensors format where possible (eliminates pickle deserialization risk entirely).

  4. Implement defense-in-depth: never rely solely on picklescan — run models in isolated containers or sandboxes before loading in production.

  5. Audit model ingestion pipelines for any automated load-after-scan patterns.

  6. For detection: monitor for unexpected process spawns or network connections originating from Python processes that load model files.

What systems are affected by GHSA-6vqj-c2q5-j97w?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps pipelines, model registries, data science notebooks.

What is the CVSS score for GHSA-6vqj-c2q5-j97w?

No CVSS score has been assigned yet.

Technical Details

NVD Description

Summary

The bypass abuses profile.Profile.runctx, a function in Python's built-in profile module, to execute code carried in a pickle file.

Details

The attack proceeds in two steps. First, the attacker crafts a payload whose __reduce__ method returns a call to profile.Profile.runctx. Then, after picklescan checks the pickle file and detects no dangerous functions, the victim calls pickle.load() on it and the embedded code runs, resulting in remote code execution.

PoC

```
class EvilProfileRunctx:
    def __reduce__(self):
        from profile import Profile
        payload = "__import__('os').system('whoami')"
        return Profile.runctx, (Profile(), payload, {}, {})
```

Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

Reported by https://github.com/FredericDT and https://github.com/Qhaoduoyu.
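The PoC never has to be loaded to be understood: Python's pickletools module can disassemble the opcode stream statically, showing that the only suspicious global the payload imports is profile.Profile.runctx, which is why a denylist keyed on names like os.system misses it. A minimal sketch, reusing the PoC's class shape but with a harmless print substituted for the advisory's whoami payload:

```python
import pickle
import pickletools

class EvilProfileRunctx:
    # Same shape as the advisory's PoC, but the payload is a harmless
    # print instead of __import__('os').system('whoami').
    def __reduce__(self):
        from profile import Profile
        payload = "print('payload runs at load time')"
        return Profile.runctx, (Profile(), payload, {}, {})

data = pickle.dumps(EvilProfileRunctx())

# Walk the opcode stream statically (nothing is executed) and collect
# every string the pickle carries, including the module / qualname
# pairs consumed by STACK_GLOBAL at load time.
names = [arg for _op, arg, _pos in pickletools.genops(data)
         if isinstance(arg, str)]

print("references profile.Profile.runctx:",
      "profile" in names and "Profile.runctx" in names)
print("references os.system:", "system" in names)
```

The os.system call exists only inside the payload string that runctx will exec() at load time, so no opcode-level scan for dangerous global names can see it.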

Exploitation Scenario

An adversary targeting an ML team crafts a malicious pickle file using profile.Profile.runctx to hide the RCE payload — the standard dangerous functions (os.system, subprocess) are absent, so picklescan passes it as safe. The attacker uploads the file to a shared model registry, sends it via a dependency update, or embeds it in a Hugging Face-style model repository. The victim's CI/CD pipeline or data scientist runs picklescan (passes), then loads the model with torch.load() or pickle.load(). The payload executes with the privileges of the ML process — enabling data exfiltration, lateral movement, or persistent access into the ML infrastructure.

Timeline

Published
August 26, 2025
Last Modified
August 26, 2025
First Seen
March 24, 2026

Related Vulnerabilities