GHSA-8r4j-24qv-fmq9: picklescan: RCE bypass enables ML supply chain attack

GHSA-8r4j-24qv-fmq9 MEDIUM
Published August 26, 2025
CISO Take

Update picklescan to 0.0.29 immediately — prior versions fail to detect a novel RCE payload that abuses Python's built-in idlelib library, silently invalidating your pickle file security gate. Any ML pipeline that downloads and loads external models and relied on picklescan for validation should treat previously scanned files as potentially compromised. Longer term, migrate model serialization to safetensors to eliminate the pickle attack surface entirely.
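As a quick gate in CI, the installed version can be checked with the standard library alone. A minimal sketch: the version floor comes from this advisory, while the function name and structure are illustrative.

```python
import importlib.metadata

def picklescan_is_patched(floor=(0, 0, 29)):
    """Return True only if an installed picklescan meets the patched floor."""
    try:
        raw = importlib.metadata.version("picklescan")
    except importlib.metadata.PackageNotFoundError:
        return False  # not installed: treat as unpatched
    # Compare the first three numeric components against the floor.
    parts = tuple(int(p) for p in raw.split(".")[:3] if p.isdigit())
    return parts >= floor

print(picklescan_is_patched())
```

Tuple comparison makes a truncated version string compare as lower than the floor, so the check fails safe.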

Risk Assessment

The MEDIUM advisory severity significantly underrepresents operational risk in AI/ML environments. This vulnerability specifically defeats picklescan — a tool organizations deploy as a primary control against pickle-based attacks — creating a dangerous false sense of security. Teams that treat picklescan clearance as sufficient authorization to load models are fully exposed. Supply chain risk is elevated: a single poisoned model file on HuggingFace Hub, an internal registry, or a CI/CD artifact store could execute arbitrary code on any engineer workstation, pipeline runner, or inference server that loads it. No CVSS score has been assigned to the CVE yet, but exploitability is trivial once the bypass technique is known.

Affected Systems

Package Ecosystem Vulnerable Range Patched
picklescan pip < 0.0.29 0.0.29

Do you use picklescan? Any version below 0.0.29 is affected.

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Moderate

Recommended Action

6 steps
  1. PATCH

    Upgrade picklescan to >= 0.0.29 immediately (fix: commit aecd11b).

  2. AUDIT

    Re-scan all pickle files previously cleared by picklescan < 0.0.29 with the updated version.

  3. RESTRICT

    Limit pickle loading to models from verified, cryptographically signed internal sources only.

  4. HARDEN

    Adopt safetensors (HuggingFace) as the default model serialization format — eliminates the pickle attack surface entirely for model weights.

  5. DETECT

    Monitor for anomalous child processes (shells, curl, wget) spawned from Python processes running pickle.load() — indicates active exploitation.

  6. SANDBOX

    Run model loading in isolated containers with no network egress and restricted filesystem access as defense-in-depth.
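Step 5 (DETECT) can be prototyped with CPython's audit hooks (PEP 578, Python 3.8+): the `pickle.find_class` audit event fires for every global a pickle resolves, before any object is constructed. The sketch below is illustrative only, not a substitute for re-scanning with a patched picklescan; the denylist and the harmless `eval` stand-in are assumptions, and a production setup would allowlist known-safe globals instead.

```python
import pickle
import sys

# Globals we refuse to resolve during unpickling (illustrative denylist;
# a real deployment would use an allowlist of known-safe globals instead).
SUSPICIOUS = {("idlelib.calltip", "Calltip"), ("builtins", "eval"),
              ("os", "system"), ("subprocess", "Popen")}

def audit(event, args):
    # "pickle.find_class" fires for every (module, name) global a pickle
    # resolves, before any object is constructed.
    if event == "pickle.find_class" and tuple(args) in SUSPICIOUS:
        raise RuntimeError(f"blocked pickle global: {args[0]}.{args[1]}")

sys.addaudithook(audit)  # note: audit hooks cannot be removed once added

class Evil:
    def __reduce__(self):
        return (eval, ("1 + 1",))  # harmless stand-in for the real payload

try:
    pickle.loads(pickle.dumps(Evil()))
except RuntimeError as exc:
    print(exc)  # blocked pickle global: builtins.eval
```

Benign pickles that reference no denylisted globals still load normally, so the hook can run alongside existing pipelines while telemetry is collected.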

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 9 - Risk management system
ISO 42001
8.4 - AI supply chain management
NIST AI RMF
GOVERN 6.1 - AI Supply Chain and Third-Party Risk Management
OWASP LLM Top 10
LLM03:2025 - Supply Chain

Frequently Asked Questions

What is GHSA-8r4j-24qv-fmq9?

GHSA-8r4j-24qv-fmq9 is a detection-bypass vulnerability in picklescan versions below 0.0.29: a pickle payload that abuses Python's built-in idlelib library (idlelib.calltip.Calltip.fetch_tip) evades the scanner and executes arbitrary code when the file is loaded. Any ML pipeline that downloads and loads external models and relied on picklescan for validation should treat previously scanned files as potentially compromised. Longer term, migrate model serialization to safetensors to eliminate the pickle attack surface entirely.

Is GHSA-8r4j-24qv-fmq9 actively exploited?

No confirmed active exploitation of GHSA-8r4j-24qv-fmq9 has been reported, but organizations should still patch proactively.

How to fix GHSA-8r4j-24qv-fmq9?

1. PATCH: Upgrade picklescan to >= 0.0.29 immediately (fix: commit aecd11b).
2. AUDIT: Re-scan all pickle files previously cleared by picklescan < 0.0.29 with the updated version.
3. RESTRICT: Limit pickle loading to models from verified, cryptographically signed internal sources only.
4. HARDEN: Adopt safetensors (HuggingFace) as the default model serialization format to eliminate the pickle attack surface for model weights.
5. DETECT: Monitor for anomalous child processes (shells, curl, wget) spawned from Python processes running pickle.load().
6. SANDBOX: Run model loading in isolated containers with no network egress and restricted filesystem access as defense-in-depth.

What systems are affected by GHSA-8r4j-24qv-fmq9?

This vulnerability affects the following AI/ML architecture patterns: Training pipelines, Model serving infrastructure, MLOps and CI/CD pipelines, Model registries, Data science workstations.

What is the CVSS score for GHSA-8r4j-24qv-fmq9?

No CVSS score has been assigned yet.

Technical Details

NVD Description

### Summary

The payload abuses idlelib.calltip.Calltip.fetch_tip, a function in Python's built-in idlelib library, to execute code when a pickle file is loaded.

### Details

The attack proceeds in two steps. First, the attacker crafts a payload whose __reduce__ method returns a call to the idlelib.calltip.Calltip.fetch_tip function. Then the victim scans the pickle file with picklescan, which reports no dangerous functions, and calls pickle.load() on the malicious file, leading to remote code execution.

### PoC

```python
class EvilCalltipFetchTip:
    def __reduce__(self):
        from idlelib.calltip import Calltip
        # fetch_tip(expression) -> get_entity(expression) -> eval(expression)
        return Calltip().fetch_tip, ("__import__('os').system('whoami')",)
```

### Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected by the scanner but executes when the file is loaded. This enables supply chain attacks: infected pickle files can be distributed through ML models, APIs, or saved Python objects.

### Credits

https://github.com/FredericDT
https://github.com/Qhaoduoyu

Exploitation Scenario

An attacker uploads a malicious PyTorch model to HuggingFace Hub or a public model registry. The file contains a pickle payload using idlelib.calltip.Calltip.fetch_tip in its __reduce__ method — a Python built-in that internally calls eval(), executing arbitrary OS commands. An ML engineer discovers the model, runs picklescan for safety validation, and receives a CLEAN result. Trusting the tool, the engineer loads the model locally or triggers it through a CI/CD pipeline. RCE fires: the attacker gains a shell in the engineer's environment or production inference infrastructure, with potential access to model weights, training datasets, API keys, and lateral movement into internal systems. The entire attack chain requires no zero-day and is replicable by any attacker who reads the public advisory.
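The gadget in this scenario is ordinary pickle behavior, not a parser bug: any callable reachable by import can be returned from __reduce__ and is invoked during deserialization. A harmless sketch of the mechanism, substituting len for the eval-reaching Calltip gadget:

```python
import pickle

class Demo:
    """Benign stand-in for the advisory's EvilCalltipFetchTip class."""
    def __reduce__(self):
        # At load time, pickle imports builtins.len and calls it with this
        # argument: the same mechanism the real payload uses to reach eval().
        return (len, ("not actually malicious",))

payload = pickle.dumps(Demo())
result = pickle.loads(payload)  # len(...) runs during deserialization
print(result)
```

The serialized bytes reference only builtins.len, which is why scanners that pattern-match on known-dangerous names can be bypassed by any overlooked gadget, and why allowlisting or avoiding pickle altogether is the durable fix.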

Timeline

Published
August 26, 2025
Last Modified
August 26, 2025
First Seen
March 24, 2026

Related Vulnerabilities