GHSA-49gj-c84q-6qm9: picklescan: scanner bypass enables RCE via ML model files

GHSA-49gj-c84q-6qm9 MEDIUM
Published August 26, 2025
CISO Take

If your team uses picklescan to gate PyTorch model ingestion, you have a false sense of security: attackers can embed arbitrary code execution via cProfile.run() in a payload that passes the scan undetected. Upgrade picklescan to 0.0.30 immediately and treat any pickle file scanned with a prior version as unverified. Longer term, mandate the safetensors format for model serialization, and never rely on a single scanner as your sole control for untrusted model loading.

Risk Assessment

While CVSS is unscored, operational risk is HIGH for organizations ingesting external ML models. The exploit is trivial to craft (3-line PoC), requires no special access, and defeats the primary security control teams trust to validate pickle files. The blast radius extends to any pipeline — CI/CD, model registry, inference serving — that calls pickle.load() after a picklescan pass. The false assurance makes this worse than having no scanner: teams believe they are protected.

Affected Systems

| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| picklescan | pip | < 0.0.30 | 0.0.30 |

Do you use picklescan below 0.0.30? You're affected.
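A quick programmatic exposure check (a minimal standard-library sketch; the 0.0.30 threshold comes from this advisory, and the helper names are illustrative):

```python
from importlib.metadata import PackageNotFoundError, version

PATCHED = (0, 0, 30)  # first fixed release per the advisory

def parse(v: str) -> tuple:
    # Naive parser for plain X.Y.Z version strings.
    return tuple(int(part) for part in v.split(".") if part.isdigit())

def picklescan_is_vulnerable() -> bool:
    try:
        return parse(version("picklescan")) < PATCHED
    except PackageNotFoundError:
        return False  # picklescan is not installed in this environment
```

Running this across every environment in your fleet is a fast first pass for the inventory step below.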

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Trivial

Recommended Action

6 steps
  1. PATCH

    Upgrade picklescan to >= 0.0.30 immediately — this is the only definitive fix.

  2. INVENTORY

    Audit all pickle files scanned with versions < 0.0.30 and re-scan or re-validate them.

  3. FORMAT

    Migrate PyTorch model serialization to safetensors (for Hugging Face models, save_pretrained(safe_serialization=True)); this eliminates pickle deserialization risk entirely.

  4. SANDBOX

    Load untrusted pickle files in an isolated environment (container, VM, restricted subprocess) with no network access and limited filesystem scope.

  5. DEFENSE-IN-DEPTH

    Never use a single scanner as a binary gate; combine picklescan with hash verification of known-good models and strict model provenance controls.

  6. DETECT

    Monitor for unexpected network connections or subprocess spawns during model load operations.
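Step 5 can start as simply as enumerating every global reference a pickle stream imports and failing closed on anything outside an allowlist; unlike picklescan's denylist, an unknown callable such as cProfile.run is rejected by default. A minimal sketch (the allowlist contents and the STACK_GLOBAL string-tracking heuristic are illustrative, not production-grade):

```python
import pickletools

# Hypothetical allowlist: only imports you expect in benign model files.
ALLOWED_GLOBALS = {("collections", "OrderedDict")}

def unexpected_globals(data: bytes) -> list:
    """Return every module.name reference in a pickle stream that is not
    on the allowlist. Fails closed: anything unrecognized is flagged."""
    found = []
    seen_strings = []  # heuristic: STACK_GLOBAL consumes the last two strings
    for op, arg, pos in pickletools.genops(data):
        if op.name == "GLOBAL":
            # pickletools renders the argument as "module name"
            module, name = arg.split(" ", 1)
            if (module, name) not in ALLOWED_GLOBALS:
                found.append(f"{module}.{name}")
        elif op.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            seen_strings.append(arg)
        elif op.name == "STACK_GLOBAL" and len(seen_strings) >= 2:
            module, name = seen_strings[-2], seen_strings[-1]
            if (module, name) not in ALLOWED_GLOBALS:
                found.append(f"{module}.{name}")
    return found
```

Note that this inspects opcodes without ever deserializing the payload, so it is safe to run on untrusted files; a production gate would also need to handle nested pickle streams and framing edge cases.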

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.2 - AI system supply chain management
NIST AI RMF
GOVERN 1.7 - Processes and procedures are in place for decommissioning and phasing out AI systems
MANAGE 2.2 - Mechanisms are in place to inventory AI risks
OWASP LLM Top 10
LLM05:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-49gj-c84q-6qm9?

GHSA-49gj-c84q-6qm9 is a scanner-bypass vulnerability in picklescan versions below 0.0.30: a malicious pickle file can invoke cProfile.run() from its __reduce__ method to execute arbitrary code while passing picklescan's checks undetected. Any pickle file vetted with an affected version should be treated as unverified, and teams should upgrade to 0.0.30 immediately.

Is GHSA-49gj-c84q-6qm9 actively exploited?

No confirmed active exploitation of GHSA-49gj-c84q-6qm9 has been reported, but organizations should still patch proactively.

How to fix GHSA-49gj-c84q-6qm9?

1. PATCH: Upgrade picklescan to >= 0.0.30 immediately; this is the only definitive fix.
2. INVENTORY: Audit all pickle files scanned with versions < 0.0.30 and re-scan or re-validate them.
3. FORMAT: Migrate PyTorch model serialization to safetensors (for Hugging Face models, save_pretrained(safe_serialization=True)); this eliminates pickle deserialization risk entirely.
4. SANDBOX: Load untrusted pickle files in an isolated environment (container, VM, restricted subprocess) with no network access and limited filesystem scope.
5. DEFENSE-IN-DEPTH: Never use a single scanner as a binary gate; combine picklescan with hash verification of known-good models and strict model provenance controls.
6. DETECT: Monitor for unexpected network connections or subprocess spawns during model load operations.

What systems are affected by GHSA-49gj-c84q-6qm9?

This vulnerability affects the following AI/ML architecture patterns: ML model serving, Training pipelines, MLOps CI/CD pipelines, Model registries, Agent frameworks with tool/plugin model loading.

What is the CVSS score for GHSA-49gj-c84q-6qm9?

No CVSS score has been assigned yet.

Technical Details

NVD Description

### Summary

A malicious pickle file can use cProfile.run(), a built-in Python library function, to execute arbitrary code while evading picklescan detection.

### Details

The attack proceeds in two steps. First, the attacker crafts a payload that calls cProfile.run() from the __reduce__ method of a pickled object. Then the victim checks the pickle file with picklescan, which detects no dangerous functions, and proceeds to pickle.load() the malicious file, leading to remote code execution.

### PoC

```
import cProfile

class EvilCProfileRun:
    def __reduce__(self):
        # cProfile.run(statement) -> Profile().run(statement) -> exec(statement)
        return cProfile.run, ("__import__('os').system('whoami')",)
```

### Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected by the scanner but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

### Corresponding

https://github.com/FredericDT
https://github.com/Qhaoduoyu
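The PoC above can be verified locally by swapping the os.system call for a harmless payload (the EXECUTED marker and the statement string below are illustrative stand-ins, not part of the original advisory):

```python
import builtins
import cProfile
import pickle

builtins.EXECUTED = []  # visible marker so we can observe execution

class ProofOfConcept:
    """Same shape as the advisory's PoC, but with a harmless payload."""
    def __reduce__(self):
        # cProfile.run(statement) -> Profile().run(statement) -> exec(statement)
        return cProfile.run, ("__import__('builtins').EXECUTED.append('ran')",)

blob = pickle.dumps(ProofOfConcept())
pickle.loads(blob)       # merely loading the pickle executes the statement
print(builtins.EXECUTED)  # -> ['ran']
```

The key observation is that nothing in the stream names exec or os.system directly; the only imported global is cProfile.run, which versions of picklescan before 0.0.30 considered benign.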

Exploitation Scenario

An adversary crafts a poisoned PyTorch model file embedding a cProfile.run()-based payload in the __reduce__ method. They publish it to a public Hugging Face Hub repository or model sharing platform. A victim organization's CI/CD pipeline pulls the model, runs picklescan for validation — the scan passes cleanly because cProfile.run is not in picklescan's blocklist. The pipeline proceeds to call torch.load() in the model serving container, triggering remote code execution. The adversary now has a shell in the inference environment with access to model weights, API keys in environment variables, and potentially internal network access to downstream data stores. This is a classic supply chain attack with an extra layer of false legitimacy granted by the compromised scanner.
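The torch.load() call in this scenario should never run with the pipeline's full privileges. Even without containers, the sandboxing recommended in step 4 can be approximated with the standard library by unpickling in a throwaway subprocess under resource limits (a Unix-only sketch; the limits and loader script are illustrative, and a real deployment should add network and filesystem confinement on top):

```python
import subprocess
import sys
import textwrap

# Loader script run in an isolated child process: rlimits cap CPU time and
# address space so a hostile payload cannot exhaust the parent's resources.
LOADER = textwrap.dedent("""
    import pickle, resource, sys
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))             # 5 s CPU cap
    resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))  # 1 GiB memory cap
    obj = pickle.load(sys.stdin.buffer)
    print(type(obj).__name__)
""")

def load_in_sandbox(blob: bytes) -> str:
    """Unpickle untrusted bytes in a resource-limited subprocess and
    report only the top-level type name back to the parent."""
    proc = subprocess.run(
        [sys.executable, "-c", LOADER],
        input=blob, capture_output=True, timeout=10,
    )
    return proc.stdout.decode().strip()
```

Returning only a type name (rather than the deserialized object) keeps the trust boundary at the process edge; a payload that detonates does so inside the disposable child.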

Timeline

Published
August 26, 2025
Last Modified
August 26, 2025
First Seen
March 24, 2026

Related Vulnerabilities