GHSA-4whj-rm5r-c2v8: picklescan: scanner bypass enables PyTorch gadget RCE

GHSA-4whj-rm5r-c2v8 MEDIUM
Published August 26, 2025
CISO Take

picklescan, widely used to gate ML model files before loading, fails to detect payloads using torch.utils.bottleneck as an RCE gadget—giving teams false confidence in their validation pipeline. If your MLOps workflow relies on picklescan as the primary safety control for pickle files, that control is broken for this vector. Upgrade to 0.0.30 immediately and move toward defense-in-depth with sandboxed loading and safetensors adoption.

Risk Assessment

High operational risk for ML teams using picklescan as a security gate. The vulnerability undermines trust in a widely-deployed safety mechanism—organizations may have false confidence in model validation while remaining fully exposed. Exploitability is moderate: requires PyTorch internals knowledge, but the PoC is public and simple. Real-world exposure is significant given how ubiquitous the pickle format is for PyTorch model weights, optimizer states, and cached objects. No CISA KEV, but supply chain attack potential elevates urgency.

Affected Systems

| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| picklescan | pip | < 0.0.30 | 0.0.30 |


Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Moderate

Recommended Action

6 steps
  1. Upgrade picklescan to 0.0.30 immediately; the patch adds detection for the run_autograd_prof gadget.
  2. Treat all scans performed with versions < 0.0.30 as unreliable; re-scan cached models and flag any loaded since the advisory date.
  3. Migrate PyTorch model weights to the safetensors format, which is structurally immune to pickle deserialization attacks.
  4. Load untrusted models in isolated sandboxes (containers with no network access, seccomp/AppArmor restrictions).
  5. Enforce model provenance via cryptographic signing and allowlisting of trusted sources rather than relying solely on content scanning.
  6. Monitor for suspicious process spawning from model-loading processes as an indicator of compromise.
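Step 5's allowlisting can also be applied at load time. A minimal stdlib-only sketch, adapted from the restricted-unpickler pattern in the Python pickle documentation: only explicitly trusted globals may be resolved during deserialization. The ALLOWED set below is illustrative, not a vetted allowlist for real PyTorch checkpoints.

```python
import io
import pickle

# Illustrative allowlist; a real deployment would enumerate the globals
# its trusted checkpoints actually need (e.g. torch rebuild helpers).
ALLOWED = {
    ("collections", "OrderedDict"),
}

class AllowlistUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Refuse to resolve any global not on the allowlist, so gadget
        # callables like run_autograd_prof can never be reconstructed.
        if (module, name) not in ALLOWED:
            raise pickle.UnpicklingError(f"blocked global: {module}.{name}")
        return super().find_class(module, name)

def safe_loads(data: bytes):
    """Deserialize, permitting only allowlisted globals."""
    return AllowlistUnpickler(io.BytesIO(data)).load()
```

This fails closed: an unknown global raises rather than executes, which complements (but does not replace) an upgraded scanner.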

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 9 - Risk Management System
ISO 42001
A.6.2.3 - AI supply chain management
NIST AI RMF
GOVERN 6.1 - Third-Party AI Risk Management
OWASP LLM Top 10
LLM03 - Supply Chain

Frequently Asked Questions

What is GHSA-4whj-rm5r-c2v8?

picklescan, widely used to gate ML model files before loading, fails to detect payloads using torch.utils.bottleneck as an RCE gadget—giving teams false confidence in their validation pipeline. If your MLOps workflow relies on picklescan as the primary safety control for pickle files, that control is broken for this vector. Upgrade to 0.0.30 immediately and move toward defense-in-depth with sandboxed loading and safetensors adoption.

Is GHSA-4whj-rm5r-c2v8 actively exploited?

No confirmed active exploitation of GHSA-4whj-rm5r-c2v8 has been reported, but organizations should still patch proactively.

How to fix GHSA-4whj-rm5r-c2v8?

1. Upgrade picklescan to 0.0.30 immediately; the patch adds detection for the run_autograd_prof gadget.
2. Treat all scans performed with versions < 0.0.30 as unreliable; re-scan cached models and flag any loaded since the advisory date.
3. Migrate PyTorch model weights to the safetensors format, which is structurally immune to pickle deserialization attacks.
4. Load untrusted models in isolated sandboxes (containers with no network access, seccomp/AppArmor restrictions).
5. Enforce model provenance via cryptographic signing and allowlisting of trusted sources rather than relying solely on content scanning.
6. Monitor for suspicious process spawning from model-loading processes as an indicator of compromise.

What systems are affected by GHSA-4whj-rm5r-c2v8?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps/CI-CD model validation, model registries.

What is the CVSS score for GHSA-4whj-rm5r-c2v8?

No CVSS score has been assigned yet.

Technical Details

NVD Description

Summary

The torch.utils.bottleneck.__main__.run_autograd_prof function, a PyTorch library function, can be abused to execute code carried in a pickle file.

Details

The attack proceeds as follows: the attacker crafts a payload whose __reduce__ method returns a call to torch.utils.bottleneck.__main__.run_autograd_prof. The victim checks the pickle file with the picklescan library, which does not detect any dangerous functions, and then calls pickle.load() on the malicious file, leading to remote code execution.

PoC

```python
import torch.utils.bottleneck.__main__ as bottleneck_main

class EvilTorchUtilsBottleneckRunAutogradProf:
    def __reduce__(self):
        code = '__import__("os").system("whoami")'
        globs = {}
        return bottleneck_main.run_autograd_prof, (code, globs)
```

Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected by the scanner but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

Credits

https://github.com/FredericDT
https://github.com/Qhaoduoyu
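The mechanism the PoC exploits is general to pickle, not specific to PyTorch: whatever callable __reduce__ returns is invoked by the unpickler at load time. A harmless stdlib-only demonstration, substituting print for the run_autograd_prof gadget:

```python
import pickle

class ReduceDemo:
    def __reduce__(self):
        # (callable, args): the unpickler calls callable(*args) on load.
        # The real payload returns run_autograd_prof here instead of print.
        return (print, ("side effect at load time",))

payload = pickle.dumps(ReduceDemo())
obj = pickle.loads(payload)  # invoking print is the "deserialization"
```

This is why scanning is allowlist-hard: any importable callable that eventually evaluates its arguments is a potential gadget, and a denylist scanner must enumerate them all.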

Exploitation Scenario

An attacker uploads a malicious PyTorch model to a public registry (HuggingFace, GitHub) or injects it via a compromised internal artifact store. The victim organization's CI/CD pipeline invokes picklescan < 0.0.30, which reports the file as clean. The model is promoted to staging and loaded with torch.load(), triggering the __reduce__ gadget chain through run_autograd_prof, which evaluates arbitrary Python expressions. The attacker achieves RCE on the model server—potentially establishing reverse shell persistence, exfiltrating training data and model IP, or pivoting laterally through the ML infrastructure.
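Beyond patching, incident responders triaging suspect model files can enumerate which globals a pickle would import without ever loading it, using only the standard library. This sketch is not part of the advisory and is a heuristic triage aid, not a replacement for an updated scanner: it covers the GLOBAL opcode (protocols 0-2) and the common string-push + STACK_GLOBAL pattern (protocol 4+), but does not resolve memo-recycled strings.

```python
import pickletools

def extract_globals(data: bytes):
    """Heuristically list (module, name) globals a pickle would import."""
    found = set()
    strings = []  # string constants pushed onto the stack so far, in order
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE",
                           "BINUNICODE8", "UNICODE"):
            strings.append(arg)
        elif opcode.name == "GLOBAL":
            # pickletools renders the argument as "module name"
            module, name = arg.split(" ", 1)
            found.add((module, name))
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            # STACK_GLOBAL consumes the two most recently pushed strings
            found.add((strings[-2], strings[-1]))
    return found
```

Any reference to torch.utils.bottleneck.__main__, os, subprocess, or builtins in an untrusted model file warrants immediate quarantine.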

Timeline

Published
August 26, 2025
Last Modified
August 26, 2025
First Seen
March 24, 2026

Related Vulnerabilities