GHSA-vr7h-p6mm-wpmh: picklescan: PyTorch gadget bypasses pickle RCE detection

GHSA-vr7h-p6mm-wpmh MEDIUM
Published August 22, 2025
CISO Take

picklescan versions <=0.0.27 fail to detect RCE payloads that use torch.jit.unsupported_tensor_ops.execWrapper as an execution primitive, giving a false clean signal on malicious model files. Any ML pipeline that uses picklescan as a security gate before loading .pkl/.pt/.pth files must update to v0.0.28 immediately and re-validate previously cleared artifacts. The real danger is organizational: teams that believed picklescan provided sufficient protection should now audit their model ingestion workflows and add defense-in-depth beyond scanner-only approaches.

Risk Assessment

CVSS is unscored, but practical risk is high for organizations using picklescan as their primary defense against malicious pickle files. Exploit complexity is trivial—the PoC is 8 lines of Python requiring only that PyTorch be installed. The attack surface is broad: any org loading PyTorch model files from external sources (Hugging Face Hub, S3, CI/CD artifact stores) and relying on picklescan for clearance is exposed. The false sense of security is the primary amplifier: a "clean" scan result actively encourages loading a malicious file.

Affected Systems

Package    | Ecosystem | Vulnerable Range | Patched
picklescan | pip       | <= 0.0.27        | 0.0.28


Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Trivial

Recommended Action

6 steps
  1. Update picklescan to v0.0.28+ immediately—this version adds execWrapper to the blocklist.

  2. Re-scan all model files previously cleared by older picklescan versions; do not treat prior scans as authoritative.

  3. Migrate model serialization to safetensors format (no code execution on load by design)—this eliminates the entire attack class.

  4. Never run pickle.load() on model files without sandbox isolation (e.g., restricted subprocess, gVisor, Firecracker).

  5. Enable detection via EDR/HIDS: watch for Python processes spawning unexpected child processes (os.system, subprocess) during model loading.

  6. Audit model provenance: prefer cryptographically signed artifacts from controlled registries over anonymous public uploads.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art.17 - Quality Management System — supply chain controls
ISO 42001
A.6.2 - AI system supply chain management
NIST AI RMF
GOVERN-6.1 - AI supply chain and third-party risk management MANAGE-2.2 - Risk treatment for third-party AI artifacts
OWASP LLM Top 10
LLM03 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-vr7h-p6mm-wpmh?

picklescan versions <=0.0.27 fail to detect RCE payloads that use torch.jit.unsupported_tensor_ops.execWrapper as an execution primitive, giving a false clean signal on malicious model files. Any ML pipeline that uses picklescan as a security gate before loading .pkl/.pt/.pth files must update to v0.0.28 immediately and re-validate previously cleared artifacts. The real danger is organizational: teams that believed picklescan provided sufficient protection should now audit their model ingestion workflows and add defense-in-depth beyond scanner-only approaches.

Is GHSA-vr7h-p6mm-wpmh actively exploited?

No confirmed active exploitation of GHSA-vr7h-p6mm-wpmh has been reported, but organizations should still patch proactively.

How to fix GHSA-vr7h-p6mm-wpmh?

1. Update picklescan to v0.0.28+ immediately—this version adds execWrapper to the blocklist.
2. Re-scan all model files previously cleared by older picklescan versions; do not treat prior scans as authoritative.
3. Migrate model serialization to safetensors format (no code execution on load by design)—this eliminates the entire attack class.
4. Never run pickle.load() on model files without sandbox isolation (e.g., restricted subprocess, gVisor, Firecracker).
5. Enable detection via EDR/HIDS: watch for Python processes spawning unexpected child processes (os.system, subprocess) during model loading.
6. Audit model provenance: prefer cryptographically signed artifacts from controlled registries over anonymous public uploads.

What systems are affected by GHSA-vr7h-p6mm-wpmh?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, model registries, MLOps pipelines, CI/CD model validation gates.

What is the CVSS score for GHSA-vr7h-p6mm-wpmh?

No CVSS score has been assigned yet.

Technical Details

NVD Description

### Summary

Using the `torch.jit.unsupported_tensor_ops.execWrapper` function, a PyTorch library function, to execute code carried in a pickle file.

### Details

The attack executes in the following steps. First, the attacker crafts the payload by returning a call to `torch.jit.unsupported_tensor_ops.execWrapper` from the `__reduce__` method. Then the victim checks the pickle file with the picklescan library, which detects no dangerous functions; the victim then calls `pickle.load()` on the malicious file, leading to remote code execution.

### PoC

```
import torch.jit.unsupported_tensor_ops as unsupported_tensor_ops

class EvilTorchJitUnsupportedTensorOpsExecWrapper:
    def __reduce__(self):
        code = '__import__("os").system("whoami")'
        glob = {}
        loc = {}
        return unsupported_tensor_ops.execWrapper, (code, glob, loc)
```

### Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

### Corresponding

https://github.com/FredericDT
https://github.com/Qhaoduoyu
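The gadget relies on nothing more exotic than the `__reduce__` protocol: unpickling invokes whatever callable `__reduce__` returned, with the supplied arguments, instead of reconstructing the original object. A stdlib-only sketch, with `print` standing in as a harmless substitute for `execWrapper`:

```python
import pickle

class Gadget:
    def __reduce__(self):
        # Harmless stand-in for execWrapper: any callable plus its arguments.
        return (print, ("this runs at pickle.load() time",))

data = pickle.dumps(Gadget())
obj = pickle.loads(data)  # calls print(...); no Gadget instance comes back
```

A scanner that blocklists known-dangerous callables must enumerate every such execution primitive; `execWrapper` was simply one that picklescan <= 0.0.27 had not yet listed.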

Exploitation Scenario

An adversary uploads a PyTorch model file to a public hub (Hugging Face, ONNX Model Zoo mirror, GitHub release). The file's pickle payload uses EvilTorchJitUnsupportedTensorOpsExecWrapper.__reduce__ to return execWrapper with arbitrary Python code. A victim organization's MLOps pipeline fetches the model, runs picklescan—which returns clean—and proceeds to load the model with pickle.load(). On load, the payload executes with the privileges of the model-loading process: an attacker can exfiltrate environment variables (API keys, cloud credentials), establish a reverse shell, or implant a persistent backdoor in the model serving environment. This is particularly dangerous in shared ML platforms where multiple teams load models from shared artifact stores.

Timeline

Published
August 22, 2025
Last Modified
August 22, 2025
First Seen
March 24, 2026

Related Vulnerabilities