GHSA-3vg9-h568-4w9m: picklescan: RCE bypass via idlelib SetText evasion

GHSA-3vg9-h568-4w9m MEDIUM
Published August 26, 2025
CISO Take

picklescan is the primary security gate ML teams use to vet PyTorch and serialized model files before loading — this vulnerability lets a crafted pickle pass that scan clean and execute arbitrary OS commands. If your ML pipelines rely on picklescan to validate models from external sources, you are fully exposed to RCE with zero friction for the attacker. Patch to 0.0.29 immediately and treat any model loaded via picklescan < 0.0.29 from untrusted sources as potentially compromised.

Risk Assessment

Officially rated medium, but operational risk is HIGH for organizations using picklescan as a security control. The exploit defeats the only defense mechanism most ML teams apply before loading serialized models. A public PoC exists, the technique is novel (abusing Python stdlib idlelib to evade signature-based scanning), and exploitation requires only that the victim trusts picklescan's verdict — no additional attacker access needed. Blast radius includes any ML inference server, training environment, or CI/CD pipeline that loads external models.

Affected Systems

| Package    | Ecosystem | Vulnerable Range | Patched |
|------------|-----------|------------------|---------|
| picklescan | pip       | < 0.0.29         | 0.0.29  |

Do you use picklescan? You're affected.

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Moderate

Recommended Action

7 steps
  1. Upgrade picklescan to >= 0.0.29 immediately — patch is available.

  2. Audit all models loaded from external sources since last picklescan update; re-scan with patched version.

  3. Migrate to SafeTensors format for model storage and distribution — eliminates pickle deserialization RCE risk entirely.

  4. Never treat a single scanning tool as sufficient; layer controls (hash verification, provenance attestation, sandboxed loading).

  5. Sandbox model loading in isolated containers with no outbound network access and restricted OS capabilities.

  6. Allowlist verified model publishers; block loading of anonymous or unverified model artifacts.

  7. Detection: monitor production environments for unexpected imports of `idlelib.debugobj` or usage of `ObjectTreeItem`; alert on OS command execution from Python deserialization contexts.
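Steps 4 through 6 above can be layered in code. The sketch below shows allowlist-based loading with a restricted `pickle.Unpickler` subclass, a pattern described in the standard-library `pickle` documentation; the specific module/class entries in `ALLOWED_GLOBALS` are illustrative and must be tuned to the classes your models actually require:

```python
import io
import pickle

# Illustrative allowlist: only these module/class pairs may be resolved
# during unpickling. Anything else (including idlelib.debugobj) is rejected.
ALLOWED_GLOBALS = {
    "collections": {"OrderedDict"},
}

class AllowlistUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Both GLOBAL and STACK_GLOBAL opcodes route through find_class,
        # so this hook sees every global reference the pickle tries to load.
        if name in ALLOWED_GLOBALS.get(module, set()):
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"blocked global during unpickling: {module}.{name}"
        )

def restricted_loads(data: bytes):
    """Load a pickle while refusing any global not on the allowlist."""
    return AllowlistUnpickler(io.BytesIO(data)).load()
```

An allowlist blocks the idlelib evasion at load time because the payload must still resolve `ObjectTreeItem` through `find_class`; it does not replace sandboxing, since pickle has other attack surfaces.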

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 9 - Risk management system
ISO 42001
8.4 - AI system supply chain management
NIST AI RMF
GOVERN 6.2 - Organizational risk tolerance for AI supply chain
OWASP LLM Top 10
LLM05 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-3vg9-h568-4w9m?

picklescan is the primary security gate ML teams use to vet PyTorch and serialized model files before loading — this vulnerability lets a crafted pickle pass that scan clean and execute arbitrary OS commands. If your ML pipelines rely on picklescan to validate models from external sources, you are fully exposed to RCE with zero friction for the attacker. Patch to 0.0.29 immediately and treat any model loaded via picklescan < 0.0.29 from untrusted sources as potentially compromised.

Is GHSA-3vg9-h568-4w9m actively exploited?

No confirmed active exploitation of GHSA-3vg9-h568-4w9m has been reported, but organizations should still patch proactively.

How to fix GHSA-3vg9-h568-4w9m?

1. Upgrade picklescan to >= 0.0.29 immediately; a patch is available.
2. Audit all models loaded from external sources since the last picklescan update and re-scan them with the patched version.
3. Migrate to the SafeTensors format for model storage and distribution, which eliminates pickle deserialization RCE risk entirely.
4. Never treat a single scanning tool as sufficient; layer controls (hash verification, provenance attestation, sandboxed loading).
5. Sandbox model loading in isolated containers with no outbound network access and restricted OS capabilities.
6. Allowlist verified model publishers; block loading of anonymous or unverified model artifacts.
7. Detection: monitor production environments for unexpected imports of `idlelib.debugobj` or usage of `ObjectTreeItem`; alert on OS command execution from Python deserialization contexts.

What systems are affected by GHSA-3vg9-h568-4w9m?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, model registries, MLOps CI/CD pipelines.

What is the CVSS score for GHSA-3vg9-h568-4w9m?

No CVSS score has been assigned yet.

Technical Details

NVD Description

### Summary

A crafted pickle file can invoke `idlelib.debugobj.ObjectTreeItem.SetText`, a function in the Python standard library, to execute arbitrary code while evading picklescan's detection.

### Details

The attack proceeds in two steps. First, the attacker crafts a payload whose `__reduce__` method returns a call to `idlelib.debugobj.ObjectTreeItem.SetText`. Then the victim scans the pickle file with picklescan, which detects no dangerous functions, and calls `pickle.load()` on the malicious file, leading to remote code execution.

### PoC

```python
class EvilDebugobjSetText:
    def __reduce__(self):
        from idlelib.debugobj import ObjectTreeItem
        # ObjectTreeItem(..., setfunction=print).SetText(cmd) evaluates cmd
        return ObjectTreeItem("label", None, print).SetText, ("__import__('os').system('whoami')",)
```

### Impact

**Who is impacted?** Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

**What is the impact?** Attackers can embed malicious code in a pickle file that remains undetected by picklescan but executes when the file is loaded. This enables supply-chain attacks: infected pickle files can be distributed through ML models, APIs, or saved Python objects.

### Credits

https://github.com/FredericDT
https://github.com/Qhaoduoyu
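The evasion succeeds because the `idlelib` reference was not on picklescan's signature list, yet that reference is plainly visible in the pickle's opcode stream. A minimal sketch of opcode-level inspection with the standard-library `pickletools` module (the `SUSPICIOUS_PREFIXES` set is illustrative, not picklescan's actual denylist):

```python
import io
import pickletools

# Illustrative prefixes; a real scanner's denylist is far longer.
SUSPICIOUS_PREFIXES = ("idlelib", "os", "subprocess")

def global_refs(data: bytes):
    """Return all "module qualname" strings referenced by GLOBAL opcodes.

    Note: protocol 2+ pickles can use STACK_GLOBAL instead, whose
    module/name arrive via the stack, so a complete scanner must also
    track the string opcodes that precede it.
    """
    refs = []
    for opcode, arg, _pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name == "GLOBAL":
            refs.append(arg)  # arg is "module qualname"
    return refs

def looks_suspicious(data: bytes) -> bool:
    # No code is executed here: genops only parses the opcode stream.
    return any(ref.startswith(SUSPICIOUS_PREFIXES) for ref in global_refs(data))
```

Static inspection like this is safe to run on untrusted files because `pickletools.genops` never resolves or executes the referenced globals.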

Exploitation Scenario

An attacker publishes a trojanized version of a popular open-source PyTorch model (e.g., a fine-tuned LLM or image classifier) to Hugging Face or GitHub. The model file carries a variant of the PoC payload in its `__reduce__` method, calling `idlelib.debugobj.ObjectTreeItem.SetText` with a reverse-shell command in place of `whoami`. An ML engineer or automated pipeline downloads the model and runs picklescan, which reports it clean. Trusting the verdict, they execute `torch.load()`, triggering RCE on the inference server or GPU training node. In a CI/CD variant, the poisoned model enters the artifact registry and propagates to production model serving, giving the attacker persistent access to inference infrastructure.
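In the CI/CD variant above, a pinned-digest check at the registry boundary is one of the cheaper provenance controls. A minimal sketch, assuming each artifact is published alongside a trusted SHA-256 digest (how that digest is distributed, e.g. via a signed manifest, is an assumption here):

```python
import hashlib
import hmac

def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
    """Return True only if the artifact's SHA-256 matches the pinned digest.

    Any tampering with the model bytes after the digest was pinned
    changes the hash and fails the check before the file is ever loaded.
    """
    actual = hashlib.sha256(data).hexdigest()
    return hmac.compare_digest(actual, pinned_sha256.lower())
```

This catches post-publication tampering but not a malicious original upload, which is why it belongs in a layered pipeline alongside scanning and sandboxed loading rather than as a standalone gate.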

Timeline

Published
August 26, 2025
Last Modified
August 26, 2025
First Seen
March 24, 2026
