GHSA-g344-hcph-8vgg: picklescan: scanner bypass enables RCE in ML pipelines

GHSA-g344-hcph-8vgg MEDIUM
Published August 26, 2025
CISO Take

If your ML pipeline uses picklescan to gate model loading, that control can be bypassed. Attackers can embed OS-level commands in pickle files using Python's built-in trace.Trace.runctx—picklescan < 0.0.29 reports them as clean. Patch to 0.0.29 immediately and adopt defense-in-depth: sandboxed model loading plus safetensors as your default serialization format.

Risk Assessment

Formally rated medium, but contextual risk is HIGH for any organization using picklescan as a security gate. The bypass exploits a Python built-in that picklescan's detection logic missed, creating a false sense of security—defenders believed they were protected while remaining fully exposed. Exposure is broad: any CI/CD pipeline, inference server, or data science environment that loads externally-sourced pickle files and relies on picklescan for validation is directly vulnerable. The exploit is trivial to implement and requires no specialized ML knowledge.

Affected Systems

Package: picklescan
Ecosystem: pip
Vulnerable Range: < 0.0.29
Patched: 0.0.29

Do you use picklescan below 0.0.29? You're affected.

Severity & Risk

CVSS 3.1: N/A
EPSS: N/A
Exploitation Status: No known exploitation
Sophistication: Moderate

Recommended Action

  1. Immediate: Upgrade picklescan to >= 0.0.29 (commit aecd11be98702caa9ba9b12189d91ad596a36114).

  2. Architectural: Migrate from pickle to safetensors format for model storage—safetensors prevents arbitrary code execution by design.

  3. Defense-in-depth: Run model loading in sandboxed environments (containers with no network, restricted syscalls via seccomp or AppArmor).

  4. Audit: Inventory all CI/CD pipelines and inference servers using picklescan; verify version.

  5. Supply chain hygiene: Only load models from trusted, cryptographically signed sources; implement model provenance verification before production deployment.
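For the audit step, installed picklescan versions can be checked programmatically. The sketch below uses only the standard library; the `MIN_SAFE` constant and both helper names are illustrative, not part of picklescan's API:

```python
from importlib import metadata

MIN_SAFE = (0, 0, 29)  # first picklescan release containing the fix

def parse_version(v: str) -> tuple:
    """Parse the leading numeric dot-separated components of a version string."""
    parts = []
    for piece in v.split("."):
        digits = ""
        for ch in piece:
            if ch.isdigit():
                digits += ch
            else:
                break
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

def picklescan_is_patched() -> bool:
    """True only if picklescan is installed and at version >= 0.0.29."""
    try:
        installed = metadata.version("picklescan")
    except metadata.PackageNotFoundError:
        return False  # not installed: nothing to rely on either way
    return parse_version(installed) >= MIN_SAFE
```

Running this check in CI for every environment that gates model loading on picklescan turns the audit step into an enforceable control rather than a one-time inventory.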

Classification

Compliance Impact

This advisory is relevant to:

EU AI Act: Art. 9 - Risk Management System
ISO 42001: A.6.1.6 - AI system supply chain
NIST AI RMF: MANAGE 2.2 - Treatments, responses, and recovery plans for AI risks
OWASP LLM Top 10: LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-g344-hcph-8vgg?

If your ML pipeline uses picklescan to gate model loading, that control can be bypassed. Attackers can embed OS-level commands in pickle files using Python's built-in trace.Trace.runctx—picklescan < 0.0.29 reports them as clean. Patch to 0.0.29 immediately and adopt defense-in-depth: sandboxed model loading plus safetensors as your default serialization format.

Is GHSA-g344-hcph-8vgg actively exploited?

No confirmed active exploitation of GHSA-g344-hcph-8vgg has been reported, but organizations should still patch proactively.

How to fix GHSA-g344-hcph-8vgg?

1. Immediate: Upgrade picklescan to >= 0.0.29 (commit aecd11be98702caa9ba9b12189d91ad596a36114).
2. Architectural: Migrate from pickle to safetensors format for model storage; safetensors prevents arbitrary code execution by design.
3. Defense-in-depth: Run model loading in sandboxed environments (containers with no network, restricted syscalls via seccomp or AppArmor).
4. Audit: Inventory all CI/CD pipelines and inference servers using picklescan; verify version.
5. Supply chain hygiene: Only load models from trusted, cryptographically signed sources; implement model provenance verification before production deployment.

What systems are affected by GHSA-g344-hcph-8vgg?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps pipelines, model registries, data science workflows.

What is the CVSS score for GHSA-g344-hcph-8vgg?

No CVSS score has been assigned yet.

Technical Details

NVD Description

### Summary

The payload abuses `trace.Trace.runctx`, a built-in Python standard library function, to execute arbitrary code when a pickle file is loaded.

### Details

The attack proceeds in two steps. First, the attacker crafts a payload that returns `trace.Trace.runctx` from a `__reduce__` method. Then the victim scans the pickle file with picklescan; because the library does not detect any dangerous functions, the victim proceeds to `pickle.load()` the malicious file, leading to remote code execution.

### PoC

```
class EvilTraceRunctx:
    def __reduce__(self):
        from trace import Trace
        payload = "__import__('os').system('whoami')"
        return Trace.runctx, (Trace(), payload, {}, {})
```

### Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected by the scanner but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

### Corresponding

https://github.com/FredericDT
https://github.com/Qhaoduoyu
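The PoC can be reproduced harmlessly by serializing such an object and inspecting the raw bytes instead of loading them. In the sketch below, a benign payload is substituted for the advisory's `os.system` call; the point is that pickle stores the callable by module and qualified name, so the `trace.Trace.runctx` reference is visible to static inspection of the stream:

```python
import pickle
from trace import Trace

class EvilTraceRunctx:
    def __reduce__(self):
        # Benign stand-in for the advisory's "__import__('os').system('whoami')"
        # payload: this string would merely assign a variable if ever loaded.
        return Trace.runctx, (Trace(), "x = 1", {}, {})

data = pickle.dumps(EvilTraceRunctx())

# The callable is serialized by module and qualified name, so the dangerous
# reference appears in the raw stream without ever calling pickle.load().
assert b"trace" in data
assert b"Trace.runctx" in data
```

This is exactly the kind of global reference a scanner's deny-list must cover; versions of picklescan before 0.0.29 did not flag it.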

Exploitation Scenario

An adversary publishes a malicious PyTorch model to HuggingFace Hub or shares it via a third-party ML repository. The pickle payload uses trace.Trace.runctx inside __reduce__ to execute arbitrary OS commands, e.g., dropping a reverse shell or exfiltrating API keys from the environment. A victim organization downloads the model as part of a normal research or fine-tuning workflow, runs picklescan as the security gate (it reports clean), then calls torch.load(), triggering full RCE. From there the attacker pivots to internal systems, steals model weights and training data, or compromises cloud credentials. This is a near-ideal supply chain vector: low attacker skill, broad victim surface, trusted tooling subverted.

Timeline

Published: August 26, 2025
Last Modified: August 26, 2025
First Seen: March 24, 2026

Related Vulnerabilities