GHSA-86cj-95qr-2p4f: picklescan: detection bypass enables PyTorch model RCE

GHSA-86cj-95qr-2p4f MEDIUM
Published August 22, 2025
CISO Take

If your ML pipeline uses picklescan to vet PyTorch model files before loading, your security control is broken: attackers can craft pickle files that pass the scan and execute arbitrary OS commands on load. Upgrade picklescan to 0.0.28 immediately and treat any model file validated with <= 0.0.27 as untrusted. This is a false negative in your defense layer, not a PyTorch bug, and exploitation is trivial: a public PoC exists.

Risk Assessment

Effective risk is higher than the medium CVSS suggests. The vulnerability lives in a security control (picklescan), so organizations relying on it have a false sense of safety. The PoC is public, the exploit is trivial to reproduce, and the payload bypasses the only commonly-used pickle safety scanner in ML workflows. Any org with a model intake process that accepts external files and validates with picklescan is directly exposed. Impact is full RCE on the model-loading host.

Affected Systems

| Package | Ecosystem | Vulnerable Range | Patched |
| --- | --- | --- | --- |
| picklescan | pip | <= 0.0.27 | 0.0.28 |

Do you use picklescan <= 0.0.27? You're affected.
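A quick way to check whether the installed picklescan falls in the vulnerable range is to read its version via the standard library and compare it against the 0.0.28 patch boundary. A minimal sketch (the `parse_version` helper is a naive, advisory-specific shortcut, not a general version parser):

```python
import importlib.metadata

def parse_version(v: str) -> tuple:
    # Naive numeric tuple parse; adequate for picklescan's X.Y.Z scheme.
    return tuple(int(p) for p in v.split(".")[:3])

def is_vulnerable(version: str) -> bool:
    # Vulnerable: <= 0.0.27; patched: >= 0.0.28 (per this advisory).
    return parse_version(version) <= (0, 0, 27)

try:
    installed = importlib.metadata.version("picklescan")
    status = "VULNERABLE" if is_vulnerable(installed) else "patched"
    print(f"picklescan {installed}: {status}")
except importlib.metadata.PackageNotFoundError:
    print("picklescan is not installed")
```

Run this in every environment in scope (CI runners, serving hosts, notebooks), since the patch must land everywhere models are scanned.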

Severity & Risk

- CVSS 3.1: N/A
- EPSS: N/A
- Exploitation Status: No known exploitation
- Sophistication: Trivial

Recommended Action

6 steps
  1. Upgrade picklescan to 0.0.28 immediately in all environments; this is the patch.
  2. Audit all model files scanned with picklescan <= 0.0.27 since 2025-08-22 and treat them as potentially compromised.
  3. Implement defense-in-depth: do not rely on any single scanner; load untrusted models in sandboxed or ephemeral environments with no network or filesystem access.
  4. Prefer safetensors over pickle for model serialization; it eliminates the entire attack class.
  5. Add a SIEM rule: alert on `torch._dynamo.guards` appearing in pickle files or scan logs.
  6. For model registries: re-scan all cached models with the patched version.
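The detection idea in the SIEM recommendation can be sketched with the standard library's `pickletools`, which parses pickle opcodes without importing or executing anything. A minimal sketch, with the needle tuned to this advisory's gadget (not a complete scanner):

```python
import pickletools

SUSPICIOUS = ("torch._dynamo.guards",)

def find_suspicious_imports(data: bytes, needles=SUSPICIOUS):
    """Statically list opcodes whose string argument mentions a needle.

    pickletools.genops only parses bytes; nothing is imported or executed.
    String args cover both GLOBAL (arg is "module name") and the unicode
    strings pushed onto the stack before STACK_GLOBAL.
    """
    hits = []
    for op, arg, pos in pickletools.genops(data):
        if isinstance(arg, str) and any(n in arg for n in needles):
            hits.append((op.name, arg, pos))
    return hits
```

Run this over model files before any `pickle.load()`. Note the lesson of this advisory: a clean result from any single scanner is not proof of safety, so treat this as one alerting layer among several.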

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
- Article 15: Accuracy, robustness and cybersecurity
- Article 9: Risk management system
ISO 42001
- A.6.2: AI risk assessment
- A.8.3: AI system integrity
NIST AI RMF
- GOVERN 1.7: Processes for AI risk management
- MANAGE 2.2: Mechanisms to sustain treatment of identified AI risks
OWASP LLM Top 10
- LLM03: Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-86cj-95qr-2p4f?

If your ML pipeline uses picklescan to vet PyTorch model files before loading, your security control is broken: attackers can craft pickle files that pass the scan and execute arbitrary OS commands on load. Upgrade picklescan to 0.0.28 immediately and treat any model file validated with <= 0.0.27 as untrusted. This is a false negative in your defense layer, not a PyTorch bug, and exploitation is trivial: a public PoC exists.

Is GHSA-86cj-95qr-2p4f actively exploited?

No confirmed active exploitation of GHSA-86cj-95qr-2p4f has been reported, but organizations should still patch proactively.

How to fix GHSA-86cj-95qr-2p4f?

1. Upgrade picklescan to 0.0.28 immediately in all environments; this is the patch.
2. Audit all model files scanned with picklescan <= 0.0.27 since 2025-08-22 and treat them as potentially compromised.
3. Implement defense-in-depth: do not rely on any single scanner; load untrusted models in sandboxed or ephemeral environments with no network or filesystem access.
4. Prefer safetensors over pickle for model serialization; it eliminates the entire attack class.
5. Add a SIEM rule: alert on `torch._dynamo.guards` appearing in pickle files or scan logs.
6. For model registries: re-scan all cached models with the patched version.

What systems are affected by GHSA-86cj-95qr-2p4f?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps model registries, CI/CD model validation pipelines, supply chain / model distribution.

What is the CVSS score for GHSA-86cj-95qr-2p4f?

No CVSS score has been assigned yet; GitHub rates the advisory's severity as Medium.

Technical Details

NVD Description

### Summary

A crafted pickle file abuses `torch._dynamo.guards.GuardBuilder.get`, a PyTorch library function, to execute attacker-controlled code when the file is loaded.

### Details

The attack executes in the following steps. First, the attacker crafts the payload by returning `torch._dynamo.guards.GuardBuilder.get` from a `__reduce__` method. The victim then checks the pickle file with the picklescan library, which does not detect any dangerous functions, and calls `pickle.load()` on the malicious file, leading to remote code execution.

### PoC

```python
import types
import torch._dynamo.guards as guards

class EvilTorchDynamoGuardsGet:
    def __reduce__(self):
        fake_self = types.SimpleNamespace(scope={})
        name = "__import__('os').system('whoami')"
        return guards.GuardBuilder.get, (fake_self, name)
```

### Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

### Corresponding

https://github.com/FredericDT
https://github.com/Qhaoduoyu
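The gadget works because `GuardBuilder.get` ultimately evaluates the `name` string it is given against the builder's scope (roughly an `eval`), so a pickle that calls it with attacker-controlled text runs arbitrary code on load. A torch-free mimic of that behavior (illustrative only: `gadget_get` is a hypothetical stand-in, not torch's actual source):

```python
import types

def gadget_get(self, name):
    """Hypothetical stand-in for the eval-style lookup the PoC abuses."""
    return eval(name, {}, self.scope)

fake_self = types.SimpleNamespace(scope={})
# A benign expression in place of the advisory's os.system payload:
result = gadget_get(fake_self, "__import__('math').sqrt(16)")
print(result)  # 4.0
```

Because the sink is an ordinary-looking library method rather than `os.system` or `eval` itself, a denylist-based scanner has nothing obvious to flag; that is the essence of the bypass.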

Exploitation Scenario

An adversary targets an MLOps team that accepts community-contributed models. They publish a seemingly legitimate PyTorch model (e.g., a fine-tuned LLM adapter) to Hugging Face or a private registry. The model file contains a crafted `__reduce__` that invokes `torch._dynamo.guards.GuardBuilder.get` with an embedded shell command. The victim's intake pipeline runs picklescan on the file, receives a clean result, and promotes the model to staging. On first inference load, the malicious `__reduce__` executes — establishing a reverse shell or exfiltrating cloud credentials from the serving environment.
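One defense-in-depth measure from the recommendations, refusing to resolve unexpected globals at load time, can be sketched with a restricted `Unpickler`. This is a minimal sketch; the `ALLOWED` set here is a hypothetical placeholder, and a real allowlist must cover every global your models legitimately reference (e.g. torch storage and tensor types):

```python
import io
import pickle

class AllowlistUnpickler(pickle.Unpickler):
    # Hypothetical minimal allowlist; extend with the globals your models need.
    ALLOWED = {("collections", "OrderedDict")}

    def find_class(self, module, name):
        # Called for every GLOBAL/STACK_GLOBAL resolution during load.
        if (module, name) not in self.ALLOWED:
            raise pickle.UnpicklingError(
                f"blocked global during unpickling: {module}.{name}")
        return super().find_class(module, name)

def safe_loads(data: bytes):
    return AllowlistUnpickler(io.BytesIO(data)).load()
```

Unlike a pre-load scanner, this fails closed at load time: a scanner bypass, the subject of this advisory, no longer results in code execution, only a refused load.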

Timeline

Published
August 22, 2025
Last Modified
August 22, 2025
First Seen
March 24, 2026

Related Vulnerabilities