GHSA-cj3c-v495-4xqh: picklescan: security bypass enables RCE in ML pipelines

GHSA-cj3c-v495-4xqh MEDIUM
Published August 26, 2025
CISO Take

If your ML pipeline uses picklescan to validate PyTorch or other pickle-based models before loading, your defense is bypassed. Attackers can distribute malicious models that pass picklescan checks yet execute arbitrary code on load. Upgrade to picklescan 0.0.29 immediately and treat any pickle-sourced model as untrusted until rescanned with the patched version.

Risk Assessment

Despite a MEDIUM severity designation, operational risk is HIGH for ML-heavy organizations. The criticality stems from false confidence: teams that adopted picklescan as their primary pickle safety gate are fully exposed. Pickle deserialization RCE is trivially weaponizable and the bypass PoC is public. Attack surface is broad — any CI/CD pipeline, model registry, or data science workflow that loads third-party or community-sourced PyTorch models is affected. No authentication or network access is required; a poisoned model file is the entire attack vector.

Affected Systems

| Package | Ecosystem | Vulnerable Range | Patched |
| --- | --- | --- | --- |
| picklescan | pip | < 0.0.29 | 0.0.29 |


Severity & Risk

CVSS 3.1: N/A
EPSS: N/A
Exploitation Status: No known exploitation
Sophistication: Moderate

Recommended Action

5 steps
  1. PATCH

    Upgrade picklescan to >= 0.0.29 immediately across all environments where it is installed.

  2. RESCAN

    Re-validate all pickle-based model files ingested since picklescan was first deployed — prior 'clean' verdicts are untrustworthy.

  3. DEFENSE-IN-DEPTH

    Do not rely on a single scanning tool for pickle safety. Complement picklescan with safetensors format adoption (pickle-free), model signing/verification, and sandboxed model loading environments.

  4. DETECT

    Search codebases and model artifacts for `code.InteractiveInterpreter` usage patterns. Add detection rules in CI pipelines for this import.

  5. POLICY

    Mandate that all imported models pass a scan with the patched version and are loaded in isolated environments (containers, VMs) before promotion to production.
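The DETECT step can be sketched with the standard library alone: walk a pickle's opcode stream with `pickletools.genops` and flag suspicious `GLOBAL`/`STACK_GLOBAL` imports. This is an illustrative pre-filter with a small hypothetical denylist, not a replacement for the patched scanner:

```python
import io
import pickletools

# Hypothetical denylist for illustration only; a real gate should rely on
# the patched picklescan rather than maintain its own list of bad imports.
SUSPICIOUS = {
    ("code", "InteractiveInterpreter"),
    ("os", "system"),
    ("builtins", "exec"),
    ("builtins", "eval"),
}

def find_suspicious_globals(data: bytes):
    """Return denylisted (module, name) pairs referenced by a pickle stream,
    without ever calling pickle.load() on it."""
    hits = []
    strings = []  # recent string constants; STACK_GLOBAL consumes the last two
    for opcode, arg, _pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "BINUNICODE8", "UNICODE"):
            strings.append(arg)
        elif opcode.name in ("GLOBAL", "INST"):
            # Protocol 0/1: argument is "module name" as one space-joined string.
            module, _, name = arg.partition(" ")
            hits.append((module, name))
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            # Protocol 2+: module and name were pushed as the preceding strings.
            hits.append((strings[-2], strings[-1]))
    return [g for g in hits if g in SUSPICIOUS]
```

Because the file is only disassembled, never deserialized, the payload cannot execute during the check.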

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
Article 9 - Risk management system for high-risk AI
ISO 42001
A.6.1.4 - AI system supply chain risk management
A.8.4 - AI system integrity and verification
NIST AI RMF
GOVERN 1.7 - Processes for tracking AI risks
MANAGE 2.2 - Mechanisms to sustain and monitor AI risk management
OWASP LLM Top 10
LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-cj3c-v495-4xqh?

GHSA-cj3c-v495-4xqh is a security-bypass vulnerability in picklescan, a scanner used to vet pickle-based ML models before loading. Malicious pickle payloads that invoke `code.InteractiveInterpreter.runcode` pass picklescan's checks yet execute arbitrary code when the model is loaded. The fix ships in picklescan 0.0.29; any pickle-sourced model vetted with an earlier version should be treated as untrusted until rescanned.

Is GHSA-cj3c-v495-4xqh actively exploited?

No confirmed active exploitation of GHSA-cj3c-v495-4xqh has been reported, but organizations should still patch proactively.

How to fix GHSA-cj3c-v495-4xqh?

1. PATCH: Upgrade picklescan to >= 0.0.29 immediately across all environments where it is installed.
2. RESCAN: Re-validate all pickle-based model files ingested since picklescan was first deployed — prior 'clean' verdicts are untrustworthy.
3. DEFENSE-IN-DEPTH: Do not rely on a single scanning tool for pickle safety. Complement picklescan with safetensors format adoption (pickle-free), model signing/verification, and sandboxed model loading environments.
4. DETECT: Search codebases and model artifacts for `code.InteractiveInterpreter` usage patterns. Add detection rules in CI pipelines for this import.
5. POLICY: Mandate that all imported models pass a scan with the patched version and are loaded in isolated environments (containers, VMs) before promotion to production.

What systems are affected by GHSA-cj3c-v495-4xqh?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps pipelines, model registries, data science workstations.

What is the CVSS score for GHSA-cj3c-v495-4xqh?

No CVSS score has been assigned yet; the advisory itself is rated MEDIUM severity.

Technical Details

NVD Description

### Summary

picklescan fails to flag pickle payloads that invoke `code.InteractiveInterpreter.runcode`, a built-in Python library function that executes arbitrary code.

### Details

The attack proceeds in two steps. First, the attacker crafts a payload whose `__reduce__` method returns a call to `code.InteractiveInterpreter.runcode`. Then, when the victim vets the pickle file with picklescan, the library detects no dangerous functions, so the victim proceeds to `pickle.load()` the malicious file, leading to remote code execution.

### PoC

```
class EvilCodeRuncode:
    def __reduce__(self):
        from code import InteractiveInterpreter
        # InteractiveInterpreter().runcode(cmd) -> exec(cmd)
        return InteractiveInterpreter().runcode, ("__import__('os').system('whoami')",)
```

### Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

### Corresponding

https://github.com/FredericDT
https://github.com/Qhaoduoyu
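The PoC above can be examined safely: serializing the object runs `__reduce__` but not the payload, and disassembling the bytes with the stdlib `pickletools` module (instead of calling `pickle.load()`) exposes the `code` / `InteractiveInterpreter` import that pre-0.0.29 picklescan missed. A minimal sketch:

```python
import pickle
import pickletools

class EvilCodeRuncode:
    def __reduce__(self):
        from code import InteractiveInterpreter
        # InteractiveInterpreter().runcode(cmd) behaves like exec(cmd)
        return InteractiveInterpreter().runcode, ("__import__('os').system('whoami')",)

# Dumping runs __reduce__ (building the payload); the OS command itself
# would only execute on pickle.load(), which we never call here.
data = pickle.dumps(EvilCodeRuncode())

# Disassemble the opcode stream: the STACK_GLOBAL entries for
# 'code' / 'InteractiveInterpreter' are the tell-tale signature.
pickletools.dis(data)
```

Running this prints the opcode listing; nothing in the payload executes because the file is never deserialized.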

Exploitation Scenario

A threat actor publishes a poisoned PyTorch model to Hugging Face or a public S3 bucket. The model is designed to appear legitimate — correct architecture, plausible weights, valid metadata. Embedded in the model's pickle data is a `__reduce__` method that calls `code.InteractiveInterpreter().runcode` with an OS command payload. A data scientist or automated MLOps pipeline downloads the model, runs it through picklescan (which reports it clean), and calls `torch.load()`. The payload executes with the privileges of the loading process — establishing persistence, exfiltrating secrets, or pivoting to training infrastructure. The organization has no indication of compromise because their approved security control reported the model as safe.

Timeline

Published
August 26, 2025
Last Modified
August 26, 2025
First Seen
March 24, 2026

Related Vulnerabilities