GHSA-4r9r-ch6f-vxmx: picklescan: PyTorch bypass allows undetected RCE

GHSA-4r9r-ch6f-vxmx MEDIUM
Published August 22, 2025
CISO Take

picklescan, the go-to tool for validating ML model files before loading, can be bypassed using a legitimate PyTorch utility function — giving your pipeline a false 'safe' signal before executing attacker code. Upgrade picklescan to 0.0.28 immediately and audit any models scanned with older versions from untrusted sources. This is particularly dangerous in MLOps pipelines that auto-load models post-scan.

Risk Assessment

Rated MEDIUM by the GitHub advisory, but HIGH practical impact. The vulnerability specifically undermines a security control (picklescan) that organizations deploy to gain confidence in model safety. Any org treating a clean picklescan result as a green light for model loading is now exposed. Exploitability is moderate: it requires PyTorch knowledge and a crafted pickle payload, but a working PoC is publicly available. Blast radius is wide: any CI/CD pipeline, model repository, or inference stack that uses picklescan as its gating control.

Affected Systems

Package      Ecosystem   Vulnerable Range   Patched
picklescan   pip         <= 0.0.27          0.0.28

If you use picklescan <= 0.0.27 to vet untrusted model files, you're affected.

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Moderate

Recommended Action

6 steps
  1. PATCH

    Upgrade picklescan to 0.0.28+ immediately (pip install --upgrade picklescan).

  2. AUDIT

    Re-scan any PyTorch models that were cleared by picklescan <= 0.0.27, especially those from external or community sources.

  3. DEFENSE-IN-DEPTH

    Never rely on scanning as the sole safeguard for pickle-format models; deserialize them inside sandboxed execution environments (gVisor, Firecracker).

  4. FORMAT MIGRATION

    Where possible, prefer SafeTensors format over pickle-based formats; SafeTensors is inherently safe from arbitrary code execution.

  5. DETECTION

    Monitor for unexpected process spawning (exec, system calls) from Python model-loading processes.

  6. POLICY

    Enforce signed model provenance for production pipelines — only load models from verified, hash-checked sources.
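The provenance check in step 6 can be sketched with the standard library alone. A minimal sketch, assuming an allowlist of pinned SHA-256 digests; the `TRUSTED_DIGESTS` mapping and `verify_model` helper are illustrative, not part of any real tool, and a production deployment would populate the allowlist from a signed manifest:

```python
import hashlib
from pathlib import Path

# Illustrative allowlist: model filename -> pinned SHA-256 digest.
# In practice this would come from a signed, out-of-band manifest.
TRUSTED_DIGESTS: dict[str, str] = {}

def sha256_of(path: Path) -> str:
    """Hash the file in 1 MiB chunks so large model files stay memory-friendly."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: Path) -> bool:
    """Allow loading only when the file's digest matches the pinned entry."""
    expected = TRUSTED_DIGESTS.get(path.name)
    return expected is not None and sha256_of(path) == expected
```

A gate like this fails closed: an unknown or tampered file is rejected regardless of what any scanner says about it.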

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 17 - Quality management system Article 9 - Risk management system for high-risk AI
ISO 42001
A.6.2.6 - AI supply chain security
NIST AI RMF
GOVERN 6.1 - Policies for third-party AI risks MANAGE 2.2 - Risk response for AI supply chain
OWASP LLM Top 10
LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-4r9r-ch6f-vxmx?

picklescan, the go-to tool for validating ML model files before loading, can be bypassed using a legitimate PyTorch utility function — giving your pipeline a false 'safe' signal before executing attacker code. Upgrade picklescan to 0.0.28 immediately and audit any models scanned with older versions from untrusted sources. This is particularly dangerous in MLOps pipelines that auto-load models post-scan.

Is GHSA-4r9r-ch6f-vxmx actively exploited?

No confirmed active exploitation of GHSA-4r9r-ch6f-vxmx has been reported, but organizations should still patch proactively.

How to fix GHSA-4r9r-ch6f-vxmx?

1. PATCH: Upgrade picklescan to 0.0.28+ immediately (pip install --upgrade picklescan).
2. AUDIT: Re-scan any PyTorch models that were cleared by picklescan <= 0.0.27, especially those from external or community sources.
3. DEFENSE-IN-DEPTH: Never rely on scanning as the sole safeguard for pickle-format models; deserialize them inside sandboxed execution environments (gVisor, Firecracker).
4. FORMAT MIGRATION: Where possible, prefer the SafeTensors format over pickle-based formats; SafeTensors is inherently safe from arbitrary code execution.
5. DETECTION: Monitor for unexpected process spawning (exec, system calls) from Python model-loading processes.
6. POLICY: Enforce signed model provenance for production pipelines: only load models from verified, hash-checked sources.

What systems are affected by GHSA-4r9r-ch6f-vxmx?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps/CI-CD pipelines, model registries, research environments.

What is the CVSS score for GHSA-4r9r-ch6f-vxmx?

No CVSS score has been assigned yet.

Technical Details

NVD Description

### Summary

The torch.utils.bottleneck.__main__.run_cprofile function, a legitimate PyTorch library function, can be abused to execute code embedded in a pickle file.

### Details

The attack proceeds in two steps. First, the attacker crafts a payload whose __reduce__ method returns a call to torch.utils.bottleneck.__main__.run_cprofile. Then, because picklescan does not flag this function as dangerous, the victim's scan reports the file as safe; when the victim subsequently calls pickle.load() on the malicious file, the embedded code executes, resulting in remote code execution.

### PoC

```python
import torch.utils.bottleneck.__main__ as bottleneck_main

class EvilTorchUtilsBottleneckRunCprofile:
    def __reduce__(self):
        code = '__import__("os").system("whoami")'
        globs = {}
        return bottleneck_main.run_cprofile, (code, globs)
```

### Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files through ML models, APIs, or saved Python objects.

### Corresponding

https://github.com/FredericDT
https://github.com/Qhaoduoyu
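The root cause is generic: pickle's __reduce__ protocol lets a file nominate any importable callable to be invoked at load time, and a denylist scanner only catches the callables it knows about. A harmless sketch of the same mechanism, substituting the builtin sorted for the dangerous PyTorch function:

```python
import pickle

class Payload:
    def __reduce__(self):
        # pickle records a (callable, args) pair; pickle.loads() invokes it.
        # An attacker swaps in a dangerous callable (e.g. the run_cprofile
        # function from this advisory) that the scanner's denylist misses.
        return sorted, (["c", "a", "b"],)

blob = pickle.dumps(Payload())
result = pickle.loads(blob)   # the loader calls sorted(...), not a constructor
print(result)                 # ['a', 'b', 'c']
```

Deserializing the blob never reconstructs a Payload at all; it simply runs whatever callable the bytes name, which is why any importable function with exploitable side effects is a potential bypass.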

Exploitation Scenario

An adversary targets an organization's model ingestion pipeline. They craft a malicious PyTorch model file embedding the payload inside a custom __reduce__ method that calls torch.utils.bottleneck.__main__.run_cprofile with attacker-controlled code. The file is uploaded to a public model hub (HuggingFace, a shared S3 bucket, or submitted via pull request to an internal registry). The victim's automated pipeline runs picklescan — which returns CLEAN. The pipeline proceeds to load the model for evaluation or deployment. Upon pickle.load(), the OS command executes, giving the attacker RCE on the model server. From there: lateral movement to training infrastructure, data exfiltration, or persistent backdoor installation. The false-safe picklescan result may also appear in audit logs, delaying incident detection.

Timeline

Published
August 22, 2025
Last Modified
August 22, 2025
First Seen
March 24, 2026

Related Vulnerabilities