GHSA-x696-vm39-cp64: picklescan: scan bypass allows RCE in ML pipelines

GHSA-x696-vm39-cp64 MEDIUM
Published August 26, 2025
CISO Take

picklescan is widely used as the primary security gate before loading PyTorch and other pickle-based model files — this bypass nullifies that control entirely. Any pipeline that loads pickle files after a picklescan 'clean' result is currently unprotected against this attack vector. Patch to picklescan 0.0.29 immediately and treat all previously scanned pickle files loaded from untrusted sources as potentially compromised.

Risk Assessment

High risk for ML-heavy organizations despite the medium CVSS classification. The severity lies not in technical complexity but in the trust relationship: teams explicitly rely on picklescan to greenlight pickle loading, so a bypass transforms a security control into a false sense of safety. Exploitability is moderate — an attacker must know the bypass technique and distribute the crafted file — but the impact is full RCE on the machine executing the model load, which in MLOps contexts is often a high-privileged training or inference server.

Affected Systems

Package: picklescan
Ecosystem: pip
Vulnerable range: < 0.0.29
Patched version: 0.0.29

Do you use picklescan? Any version below 0.0.29 is affected.

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Moderate

Recommended Action

6 steps
  1. Patch: upgrade picklescan to >= 0.0.29 immediately.

  2. Audit: re-scan all pickle files that were previously cleared by older versions, especially those sourced externally.

  3. Defense-in-depth: do not rely solely on picklescan — add safetensors format as the default for model serialization where possible (safetensors does not use pickle).

  4. Sandboxing: execute pickle.load() in an isolated container or VM with no network access and restricted filesystem permissions.

  5. Detection: monitor for unexpected process spawning (e.g., os.system, subprocess calls) originating from Python interpreter processes that are loading model files.

  6. Supply chain controls: enforce model provenance verification (hash + signature) before loading, regardless of scan results.
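Step 6 can be as simple as pinning a SHA-256 digest for each approved model artifact and refusing to load anything that does not match. A minimal sketch, assuming a digest allowlist populated from a signed manifest (the `APPROVED_DIGESTS` registry and file paths are illustrative, not part of any specific tool):

```python
import hashlib

# Illustrative allowlist: artifact path -> expected SHA-256 digest,
# e.g. populated from a signed manifest in your model registry.
APPROVED_DIGESTS = {
    "models/resnet50.pkl": "placeholder-digest",
}

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large model checkpoints need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verified_load_allowed(path: str) -> bool:
    """Gate pickle.load() on an exact digest match; unknown files are rejected."""
    expected = APPROVED_DIGESTS.get(path)
    return expected is not None and sha256_of(path) == expected
```

Only call pickle.load() on a file when `verified_load_allowed` returns True; a scanner verdict alone is not sufficient authorization.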

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art.15 - Accuracy, robustness and cybersecurity for high-risk AI systems
ISO 42001
A.6.1.6 - AI supply chain security
NIST AI RMF
GOVERN 1.7 - Processes for decommissioning or updating AI systems
MANAGE 2.4 - Residual risks and the limitations of AI system components
OWASP LLM Top 10
LLM03:2025 - Supply Chain

Frequently Asked Questions

What is GHSA-x696-vm39-cp64?

GHSA-x696-vm39-cp64 is a scan bypass in picklescan versions below 0.0.29: a crafted pickle file can invoke profile.Profile.run via a __reduce__ method to execute arbitrary code without being flagged. Because many pipelines treat a picklescan 'clean' result as authorization to call pickle.load(), the bypass enables remote code execution in ML pipelines that load pickle-based models from untrusted sources.

Is GHSA-x696-vm39-cp64 actively exploited?

No confirmed active exploitation of GHSA-x696-vm39-cp64 has been reported, but organizations should still patch proactively.

How to fix GHSA-x696-vm39-cp64?

  1. Patch: upgrade picklescan to >= 0.0.29 immediately.
  2. Audit: re-scan all pickle files that were previously cleared by older versions, especially those sourced externally.
  3. Defense-in-depth: do not rely solely on picklescan — add safetensors format as the default for model serialization where possible (safetensors does not use pickle).
  4. Sandboxing: execute pickle.load() in an isolated container or VM with no network access and restricted filesystem permissions.
  5. Detection: monitor for unexpected process spawning (e.g., os.system, subprocess calls) originating from Python interpreter processes that are loading model files.
  6. Supply chain controls: enforce model provenance verification (hash + signature) before loading, regardless of scan results.

What systems are affected by GHSA-x696-vm39-cp64?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps platforms, model registries.

What is the CVSS score for GHSA-x696-vm39-cp64?

No CVSS score has been assigned yet.

Technical Details

NVD Description

Summary

A crafted pickle file can use profile.Profile.run, a function from Python's built-in profile library, to execute arbitrary code while evading picklescan.

Details

The attack proceeds in two steps. First, the attacker crafts a payload that calls profile.Profile.run inside a __reduce__ method. Then the victim checks the pickle file with the picklescan library, which does not detect any dangerous functions, and proceeds to pickle.load() the malicious file — leading to remote code execution.

PoC

```
class EvilProfileRun:
    def __reduce__(self):
        from profile import Profile
        payload = "__import__('os').system('whoami')"
        return Profile.run, (Profile(), payload)
```

Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

Credits: https://github.com/FredericDT, https://github.com/Qhaoduoyu
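The reason a denylist scanner can be bypassed is visible in the pickle stream itself: every callable a pickle will invoke is named by a GLOBAL or STACK_GLOBAL opcode, so any (module, name) pair outside a strict allowlist is grounds for rejection. A hedged sketch using the standard library's pickletools — the allowlist contents are illustrative and this is not picklescan's actual implementation:

```python
import pickletools

# Illustrative allowlist of (module, name) pairs a model file may import.
SAFE_GLOBALS = {("collections", "OrderedDict")}

def imported_globals(data: bytes):
    """Yield every (module, name) pair the pickle stream would import.

    Simplified: STACK_GLOBAL is paired with the two most recent string
    opcodes; a production scanner must also model the pickle memo.
    """
    strings = []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("GLOBAL", "INST"):
            module, name = arg.split(" ", 1)
            yield module, name
        elif opcode.name == "STACK_GLOBAL":
            yield strings[-2], strings[-1]
        if isinstance(arg, str):
            strings.append(arg)

def flag_unknown_imports(data: bytes) -> list:
    """Return every import outside the allowlist; any hit means: do not load."""
    return [pair for pair in imported_globals(data) if pair not in SAFE_GLOBALS]
```

Static opcode inspection never executes the payload, unlike pickle.load(); against a malicious stream it surfaces entries such as ('posix', 'system'), while a plain data pickle yields no hits.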

Exploitation Scenario

An attacker publishes a poisoned PyTorch model to a public model repository (Hugging Face, GitHub, internal model registry). The model file contains a crafted __reduce__ method that uses profile.Profile.run to execute an arbitrary OS command. A data scientist or automated MLOps pipeline downloads the model and runs picklescan before loading — picklescan returns clean because it does not flag Profile.run as dangerous. The pipeline calls pickle.load() on the 'verified' file. The payload executes: in the simplest case whoami, but in a weaponized version it establishes a reverse shell, exfiltrates API keys and training data from the environment, or installs a persistent backdoor in the Python environment used by the training cluster.
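The final step of the scenario — payload execution at pickle.load() time — is also where detection can bite: since Python 3.8, interpreter audit events such as os.system and subprocess.Popen fire before the underlying operation runs. A minimal tripwire sketch; the event list is an illustrative subset, and this complements rather than replaces container/VM isolation:

```python
import sys

# Audit events that no model deserialization should ever trigger (illustrative subset).
BLOCKED_EVENTS = {"os.system", "subprocess.Popen", "os.exec", "os.spawn", "os.posix_spawn"}

def install_load_guard() -> None:
    """Abort any blocked operation by raising from the audit hook.

    Audit hooks cannot be uninstalled, so in practice this would run in a
    short-lived worker process dedicated to deserialization.
    """
    def hook(event: str, args: tuple) -> None:
        if event in BLOCKED_EVENTS:
            raise RuntimeError(f"blocked during model load: {event}")
    sys.addaudithook(hook)
```

With the guard installed, a pickle.load() that reaches os.system raises instead of spawning a shell, turning a silent compromise into a loud failure.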

Timeline

Published
August 26, 2025
Last Modified
August 26, 2025
First Seen
March 24, 2026

Related Vulnerabilities