GHSA-f745-w6jp-hpxx: picklescan: RCE bypass via torch.utils.collect_env

GHSA-f745-w6jp-hpxx MEDIUM
Published August 22, 2025
CISO Take

picklescan, widely used to validate PyTorch model files before loading, fails to detect malicious payloads crafted with torch.utils.collect_env.run — giving teams a false sense of security. Any ML pipeline that downloads models from external sources and uses picklescan as the safety gate is fully exposed to supply chain RCE. Update picklescan to 0.0.28 immediately and adopt safetensors as the default model format going forward.

Risk Assessment

CVSS is unscored but operational risk is HIGH in AI/ML contexts. The vulnerability does not require network access or elevated privileges — it only requires a victim to load a pickle file that passed picklescan validation. With the PoC publicly available, weaponization requires zero expertise. Impact is full code execution on the host running the ML workload, which in cloud environments often has broad IAM permissions, access to training data, and model registries.

Affected Systems

Package: picklescan
Ecosystem: pip
Vulnerable Range: <= 0.0.27
Patched: 0.0.28


Severity & Risk

CVSS 3.1: N/A
EPSS: N/A
Exploitation Status: No known exploitation
Sophistication: Trivial

Recommended Action

5 steps
  1. PATCH

    Upgrade picklescan to >= 0.0.28 immediately (PR #47 adds torch.utils.collect_env.run to the blocklist).

  2. VERIFY

    Audit scanned model files for references to torch.utils.collect_env.run (for example, grep -r 'collect_env' over the extracted pickle contents), or run modelscan as a secondary scanner.

  3. FORMAT MIGRATION

    Migrate from pickle-based .pt/.pth files to the safetensors format, which stores raw tensors without executable code and is now the recommended format for HuggingFace models.

  4. DEFENSE IN DEPTH

    Never rely on a single scanner. Combine picklescan with network egress controls, sandboxed model loading environments, and cryptographic hash verification of model files from trusted sources.

  5. DETECT

    Alert on unexpected subprocess spawns or file creation (e.g., files in /tmp) during model load operations.
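The DETECT step above can be approximated in-process: Python's sys.addaudithook (3.8+) fires standard audit events on process spawns, so a hook installed before model loading can block or log unexpected command execution. A minimal sketch; the blocking policy and event list here are illustrative, not a complete sandbox:

```python
import sys

# CPython audit events raised when a process is spawned.
BLOCKED_EVENTS = {"subprocess.Popen", "os.system", "os.exec"}

def block_subprocess(event, args):
    # Audit hooks fire for many runtime events; act only on process spawns.
    if event in BLOCKED_EVENTS:
        raise RuntimeError(f"blocked {event} during model load: {args!r}")

# Install before calling torch.load(); audit hooks cannot be removed afterwards.
sys.addaudithook(block_subprocess)
```

This catches the specific collect_env.run payload (which shells out via subprocess) but not payloads using other primitives, so it complements rather than replaces scanning.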

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity Article 9 - Risk management system
ISO 42001
A.10.3 - AI system monitoring and measurement A.6.1.3 - AI system risk criteria
NIST AI RMF
GOVERN 6.1 - Policies and procedures are in place for organizational risk MANAGE 2.2 - Mechanisms are in place and applied to sustain the value of deployed AI systems
OWASP LLM Top 10
LLM03:2025 - Supply Chain

Frequently Asked Questions

What is GHSA-f745-w6jp-hpxx?

picklescan, widely used to validate PyTorch model files before loading, fails to detect malicious payloads crafted with torch.utils.collect_env.run — giving teams a false sense of security. Any ML pipeline that downloads models from external sources and uses picklescan as the safety gate is fully exposed to supply chain RCE. Update picklescan to 0.0.28 immediately and adopt safetensors as the default model format going forward.

Is GHSA-f745-w6jp-hpxx actively exploited?

No confirmed active exploitation of GHSA-f745-w6jp-hpxx has been reported, but organizations should still patch proactively.

How to fix GHSA-f745-w6jp-hpxx?

1. PATCH: Upgrade picklescan to >= 0.0.28 immediately (PR #47 adds torch.utils.collect_env.run to the blocklist).
2. VERIFY: Audit scanned model files for references to torch.utils.collect_env.run (for example, grep -r 'collect_env' over the extracted pickle contents), or run modelscan as a secondary scanner.
3. FORMAT MIGRATION: Migrate from pickle-based .pt/.pth files to the safetensors format, which stores raw tensors without executable code and is now the recommended format for HuggingFace models.
4. DEFENSE IN DEPTH: Never rely on a single scanner. Combine picklescan with network egress controls, sandboxed model loading environments, and cryptographic hash verification of model files from trusted sources.
5. DETECT: Alert on unexpected subprocess spawns or file creation (e.g., files in /tmp) during model load operations.

What systems are affected by GHSA-f745-w6jp-hpxx?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps pipelines, model registries, data science notebooks.

What is the CVSS score for GHSA-f745-w6jp-hpxx?

No CVSS score has been assigned yet.

Technical Details

NVD Description

### Summary

The torch.utils.collect_env.run function, a PyTorch library helper, can be abused from a pickle file to execute arbitrary commands.

### Details

The attack proceeds in two steps. First, the attacker crafts a payload that calls torch.utils.collect_env.run from the __reduce__ method. Then the victim checks the pickle file with the picklescan library, which does not detect any dangerous functions, and proceeds to pickle.load() the malicious file, leading to remote code execution.

### PoC

```
import torch.utils.collect_env as collect_env

class EvilTorchUtilsCollectEnvRun:
    def __reduce__(self):
        command = 'touch /tmp/collect_env_run_success'
        return collect_env.run, (command,)
```

### Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

### Credits

https://github.com/FredericDT
https://github.com/Qhaoduoyu
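To see why scanners must blocklist specific callables, note that __reduce__ may return any callable plus an argument tuple, and pickle invokes that callable at load time. A benign stand-in (using print in place of collect_env.run) demonstrates the mechanism without torch installed:

```python
import pickle

class Demo:
    def __reduce__(self):
        # pickle records this callable and args, and calls it at load time
        return print, ("side effect at pickle.load time",)

blob = pickle.dumps(Demo())
obj = pickle.loads(blob)  # prints the message; the "loaded object" is print's return value, None
```

Substituting a command-running function for print is the entire exploit; the scanner's only leverage is recognizing which callables are dangerous, which is why each newly discovered gadget like collect_env.run requires a blocklist update.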

Exploitation Scenario

Attacker publishes a PyTorch model on HuggingFace or a private artifact registry, embedding a payload via torch.utils.collect_env.run in the pickle __reduce__ method; the payload executes an arbitrary OS command (reverse shell, credential harvester, cryptominer). The victim organization's MLOps pipeline downloads the model as part of a fine-tuning or evaluation workflow, runs picklescan, which returns clean, and proceeds to torch.load() the file. The payload executes with the privileges of the ML worker process, which in AWS SageMaker or GCP Vertex AI typically has an IAM role with S3/GCS read access, potentially exposing training datasets, model weights, and environment secrets.
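The hash-verification control from the DEFENSE IN DEPTH recommendation can sit as a gate in front of torch.load(): refuse to deserialize any artifact whose SHA-256 does not match a digest pinned from a trusted source. A minimal sketch, with illustrative function names:

```python
import hashlib

def sha256_file(path: str) -> str:
    """Stream the file in chunks so large checkpoints aren't read into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verified_path(path: str, expected_sha256: str) -> str:
    """Return path only if its digest matches the pinned value; raise otherwise."""
    actual = sha256_file(path)
    if actual != expected_sha256:
        raise ValueError(f"refusing to load {path}: digest {actual} does not match pin")
    return path

# Usage sketch: model = torch.load(verified_path("model.pt", PINNED_DIGEST))
```

This does not make pickle safe; it only ensures the bytes you load are the bytes you reviewed and pinned, which blocks silent substitution in a registry or download path.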

Timeline

Published
August 22, 2025
Last Modified
August 22, 2025
First Seen
March 24, 2026

Related Vulnerabilities