picklescan, widely used to validate PyTorch model files before loading, fails to detect malicious payloads crafted with torch.utils.collect_env.run — giving teams a false sense of security. Any ML pipeline that downloads models from external sources and uses picklescan as the safety gate is fully exposed to supply chain RCE. Update picklescan to 0.0.28 immediately and adopt safetensors as the default model format going forward.
Risk Assessment
CVSS is unscored but operational risk is HIGH in AI/ML contexts. The vulnerability does not require network access or elevated privileges — it only requires a victim to load a pickle file that passed picklescan validation. With the PoC publicly available, weaponization requires zero expertise. Impact is full code execution on the host running the ML workload, which in cloud environments often has broad IAM permissions, access to training data, and model registries.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| picklescan | pip | <= 0.0.27 | 0.0.28 |
If any of your pipelines run picklescan <= 0.0.27 as a safety gate before loading pickle files, you are affected.
Recommended Action
1. PATCH: Upgrade picklescan to >= 0.0.28 immediately (PR #47 adds torch.utils.collect_env.run to the blocklist).
2. VERIFY: Audit scanned model files for torch.utils.collect_env.run references — for example, run grep -r 'collect_env' across unpickled code, or use modelscan as a secondary scanner.
3. FORMAT MIGRATION: Migrate from pickle-based .pt/.pth files to the safetensors format, which stores raw tensor data with no code execution on load and is now the recommended format for Hugging Face models.
4. DEFENSE IN DEPTH: Never rely on a single scanner. Combine picklescan with network egress controls, sandboxed model-loading environments, and cryptographic hash verification of model files from trusted sources.
5. DETECT: Alert on unexpected subprocess spawns or file creation (e.g., files appearing in /tmp) during model load operations.
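The VERIFY step can be approximated without unpickling anything: a .pt/.pth file written by torch.save is a zip archive whose members include a data.pkl, so the raw bytes of each member can be searched for a collect_env reference. A minimal sketch follows; the function name and marker are illustrative, and this is a coarse secondary check, not a replacement for an updated scanner.

```python
import zipfile

# Byte pattern a collect_env-based payload must embed to import the
# dangerous callable. Illustrative; a real check would cover more names.
MARKER = b"collect_env"

def mentions_collect_env(path: str) -> bool:
    """Scan a checkpoint's raw bytes for a collect_env reference,
    without ever unpickling it."""
    if zipfile.is_zipfile(path):
        # torch.save (zip format): inspect every archive member.
        with zipfile.ZipFile(path) as zf:
            return any(MARKER in zf.read(name) for name in zf.namelist())
    # Legacy non-zip checkpoints are a bare pickle stream.
    with open(path, "rb") as f:
        return MARKER in f.read()
```

A hit only proves the string is present, so treat it as a signal to quarantine and inspect, not as a verdict.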
Frequently Asked Questions
What is GHSA-f745-w6jp-hpxx?
GHSA-f745-w6jp-hpxx is a scanner-bypass vulnerability in picklescan (<= 0.0.27): malicious pickle payloads built on torch.utils.collect_env.run pass validation undetected, so any ML pipeline that uses picklescan as its safety gate before loading external models is exposed to supply chain remote code execution. The fix is to update picklescan to 0.0.28 and adopt safetensors as the default model format.
Is GHSA-f745-w6jp-hpxx actively exploited?
No confirmed active exploitation of GHSA-f745-w6jp-hpxx has been reported, but organizations should still patch proactively.
How to fix GHSA-f745-w6jp-hpxx?
1. PATCH: Upgrade picklescan to >= 0.0.28 immediately (PR #47 adds torch.utils.collect_env.run to the blocklist).
2. VERIFY: Audit scanned model files for torch.utils.collect_env.run references — for example, run grep -r 'collect_env' across unpickled code, or use modelscan as a secondary scanner.
3. FORMAT MIGRATION: Migrate from pickle-based .pt/.pth files to the safetensors format, which stores raw tensor data with no code execution on load and is now the recommended format for Hugging Face models.
4. DEFENSE IN DEPTH: Never rely on a single scanner. Combine picklescan with network egress controls, sandboxed model-loading environments, and cryptographic hash verification of model files from trusted sources.
5. DETECT: Alert on unexpected subprocess spawns or file creation (e.g., files appearing in /tmp) during model load operations.
What systems are affected by GHSA-f745-w6jp-hpxx?
This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps pipelines, model registries, data science notebooks.
What is the CVSS score for GHSA-f745-w6jp-hpxx?
No CVSS score has been assigned yet.
Technical Details
NVD Description
### Summary

torch.utils.collect_env.run is a PyTorch library function that executes a shell command. picklescan does not flag it, so a pickle payload built on it passes validation.

### Details

The attack proceeds in two steps. First, the attacker crafts a payload that calls torch.utils.collect_env.run from the `__reduce__` method of a pickled object. Then the victim checks the pickle file with picklescan, which detects no dangerous functions, and proceeds to pickle.load() the malicious file, leading to remote code execution.

### PoC

```python
import torch.utils.collect_env as collect_env

class EvilTorchUtilsCollectEnvRun:
    def __reduce__(self):
        command = 'touch /tmp/collect_env_run_success'
        return collect_env.run, (command,)
```

### Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models. What is the impact? Attackers can embed malicious code in a pickle file that remains undetected by the scanner but executes when the file is loaded. This enables supply chain attacks: infected pickle files can be distributed inside ML models, APIs, or saved Python objects.

### Credits

https://github.com/FredericDT
https://github.com/Qhaoduoyu
Exploitation Scenario
Attacker publishes a PyTorch model on HuggingFace or a private artifact registry, embedding a payload via torch.utils.collect_env.run in the pickle __reduce__ method — the payload executes an arbitrary OS command (reverse shell, credential harvester, cryptominer). The victim organization's MLOps pipeline downloads the model as part of fine-tuning or evaluation workflow, runs picklescan which returns clean, and proceeds to torch.load() the file. The payload executes with the privileges of the ML worker process, which in AWS SageMaker or GCP Vertex AI typically has an IAM role with S3/GCS read access, potentially exposing training datasets, model weights, and environment secrets.
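The hash-verification control from the DEFENSE IN DEPTH step breaks this scenario at the download stage: the pipeline refuses to load any artifact whose digest does not match a pinned value from a trusted registry. A minimal sketch, with the pinned digest and filename as placeholders:

```python
import hashlib
from pathlib import Path

# Pinned digests would come from your model registry or release notes;
# this entry is a deliberate placeholder, not a real digest.
PINNED_SHA256 = {
    "model.pt": "expected-hex-digest-from-your-model-registry",
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def load_verified(path: Path):
    """Refuse to deserialize anything whose digest is not pinned."""
    if sha256_of(path) != PINNED_SHA256.get(path.name):
        raise RuntimeError(f"hash mismatch for {path}: refusing to load")
    import torch  # only reached after verification succeeds
    return torch.load(path, map_location="cpu")
```

This does not make pickle safe; it narrows the attack to compromising the registry that publishes the digests, which is a much smaller surface than every download path.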
Related Vulnerabilities
All in the same package (picklescan):
- GHSA-vvpj-8cmc-gx39 (10.0): security flaw enables exploitation
- GHSA-g38g-8gr9-h9xp (9.8): Allowlist Bypass evades input filtering
- GHSA-7wx9-6375-f5wh (9.8): Allowlist Bypass evades input filtering
- CVE-2025-1945 (9.8): ZIP flag bypass enables RCE in PyTorch models
- GHSA-hgrh-qx5j-jfwx (8.8): Protection Bypass circumvents security controls