CVE-2025-10157: PickleScan: subclass bypass enables malicious model RCE

GHSA-f7qq-56ww-84cr HIGH PoC AVAILABLE CISA: TRACK*
Published September 10, 2025
CISO Take

If your ML pipeline uses PickleScan as a security gate before loading PyTorch models or pickle files, your defense is currently bypassable. Attackers can craft malicious models using subclasses of dangerous imports (e.g., asyncio submodules) that PickleScan flags only as 'Suspicious' instead of 'Dangerous', allowing them through. Patch to picklescan 0.0.31 immediately and audit any models scanned with older versions.

Risk Assessment

High risk for any organization using PickleScan as a trust boundary in their ML model pipeline. The vulnerability is particularly insidious because it undermines a security control specifically designed to catch this class of threat — organizations relying on it likely have no other compensating control for pickle-based RCE. EPSS is low (0.00108) indicating no known active exploitation yet, but the PoC is public and the bypass is trivially reproducible. Exposure surface includes HuggingFace model consumers, internal model registries, and any CI/CD pipeline that gates model promotion on PickleScan results.

Affected Systems

Package: picklescan
Ecosystem: pip
Vulnerable Range: <= 0.0.30
Patched: 0.0.31

Do you use picklescan <= 0.0.30? You're affected.
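To confirm exposure, the installed version can be checked against the advisory's vulnerable range. A minimal sketch (not official tooling; it assumes plain numeric release strings like picklescan's `0.0.x` versions):

```python
from importlib import metadata

def is_vulnerable(version: str) -> bool:
    """True if the version falls in the advisory's vulnerable range (<= 0.0.30)."""
    parts = tuple(int(p) for p in version.split("."))
    return parts <= (0, 0, 30)

try:
    installed = metadata.version("picklescan")
    status = "VULNERABLE - upgrade to >= 0.0.31" if is_vulnerable(installed) else "patched"
    print(f"picklescan {installed}: {status}")
except metadata.PackageNotFoundError:
    print("picklescan is not installed")
```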

Severity & Risk

CVSS 3.1: 8.3 / 10
EPSS: 0.2% chance of exploitation in 30 days (higher than 44% of all CVEs)

Exploitation Status: Exploit Available
Exploitation: Medium
Sophistication: Moderate
Exploitation Confidence: Medium
CISA SSVC: Public PoC (indexed in trickest/cve)

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

Attack Vector (AV): Network
Attack Complexity (AC): Low
Privileges Required (PR): None
User Interaction (UI): Required
Scope (S): Unchanged
Confidentiality (C): High
Integrity (I): High
Availability (A): Low

Recommended Action

5 steps
  1. PATCH

    Upgrade picklescan to >= 0.0.31 immediately (pip install --upgrade picklescan).

  2. AUDIT

    Identify all models that were scanned and approved using picklescan <= 0.0.30 — treat them as untrusted until rescanned.

  3. DEFENSE-IN-DEPTH

    Do not rely solely on PickleScan. Add sandboxed model loading (e.g., subprocess isolation, gVisor/seccomp), hash/signature verification against trusted sources, and convert models to safe serialization formats (safetensors) where possible.

  4. DETECT

    Monitor for unexpected network connections or process spawns during model load operations.

  5. POLICY

    Block loading of pickle-based models from unverified sources regardless of scan results until patched version is deployed.
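The hash/signature verification recommended above can be sketched as a small pre-load gate. This is a minimal illustration, not PickleScan functionality; the expected digest would be pinned from a trusted source such as your internal model registry:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file in chunks so large model archives aren't read into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: str, expected_sha256: str) -> bool:
    """Gate model loading on a digest pinned from a trusted source."""
    return sha256_of(path) == expected_sha256

# Usage sketch (hypothetical digest): refuse to torch.load() anything that fails
# if not verify_model("model.pt", pinned_digest):
#     raise RuntimeError("untrusted model - digest mismatch")
```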

CISA SSVC Assessment

Decision: Track*
Exploitation: poc
Automatable: No
Technical Impact: partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Compliance Impact

This CVE is relevant to:

EU AI Act: Article 9 - Risk management system
ISO 42001: A.6.2 - AI system supply chain
NIST AI RMF: MANAGE 2.2 - Mechanisms exist to sustain oversight of AI systems; MAP 5.1 - Likelihood and magnitude of risks are characterized
OWASP LLM Top 10: LLM05:2025 - Supply Chain Vulnerabilities

Related AI Incidents (1)

Source: AI Incident Database (AIID)

Frequently Asked Questions

What is CVE-2025-10157?

CVE-2025-10157 is a high-severity vulnerability in PickleScan (versions <= 0.0.30), a scanner widely used as a security gate before loading PyTorch models and pickle files. Because PickleScan matches unsafe globals against exact module names only, attackers can craft malicious models that import submodules or subclasses of dangerous packages (e.g., asyncio.unix_events), which PickleScan flags only as 'Suspicious' rather than 'Dangerous'. Such models pass the scan and can achieve arbitrary code execution when loaded. Version 0.0.31 fixes the issue.

Is CVE-2025-10157 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2025-10157, increasing the risk of exploitation.

How to fix CVE-2025-10157?

1. PATCH: Upgrade picklescan to >= 0.0.31 immediately (pip install --upgrade picklescan).
2. AUDIT: Identify all models that were scanned and approved using picklescan <= 0.0.30 — treat them as untrusted until rescanned.
3. DEFENSE-IN-DEPTH: Do not rely solely on PickleScan. Add sandboxed model loading (e.g., subprocess isolation, gVisor/seccomp), hash/signature verification against trusted sources, and convert models to safe serialization formats (safetensors) where possible.
4. DETECT: Monitor for unexpected network connections or process spawns during model load operations.
5. POLICY: Block loading of pickle-based models from unverified sources regardless of scan results until the patched version is deployed.

What systems are affected by CVE-2025-10157?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, model registries, ML CI/CD pipelines, model import workflows.

What is the CVSS score for CVE-2025-10157?

CVE-2025-10157 has a CVSS v3.1 base score of 8.3 (HIGH). The EPSS exploitation probability is 0.21%.

Technical Details

NVD Description

Summary

The vulnerability allows malicious actors to bypass PickleScan's unsafe-globals check, leading to potential arbitrary code execution. The issue stems from PickleScan's strict check of full module names against its list of unsafe globals. By using subclasses of dangerous imports instead of the exact module names, attackers can circumvent the check and inject malicious payloads.

PoC

1. Download a model that uses the `asyncio` package:

```
wget https://huggingface.co/iluem/linux_pkl/resolve/main/asyncio_asyncio_unix_events___UnixSubprocessTransport__start.pkl
```

2. Check it with PickleScan:

```
picklescan -p asyncio_asyncio_unix_events___UnixSubprocessTransport__start.pkl -g
```

Expected Result: PickleScan should identify the `asyncio` import as dangerous and flag the pickle file as malicious, since `asyncio` is in the `_unsafe_globals` dictionary.

Actual Result: PickleScan marks the import only as Suspicious, failing to identify it as a dangerous import.

Impact

Severity: High

Affected Users: Any organization (such as HuggingFace) or individual using PickleScan to analyze PyTorch models or other files distributed as ZIP archives for malicious pickle content.

Impact Details: Attackers can craft malicious PyTorch models containing embedded pickle payloads, package them into ZIP archives, and bypass the PickleScan check by using subclasses of dangerous imports. This can lead to arbitrary code execution on the user's system when these malicious files are processed or loaded.
Recommendations

Replace the exact-match lookup in src/picklescan/scanner.py (https://github.com/mmaitre314/picklescan/blob/2a8383cfeb4158567f9770d86597300c9e508d0f/src/picklescan/scanner.py#L309C9-L309C54):

```
unsafe_filter = _unsafe_globals.get(g.module)
```

with prefix-aware matching that also catches submodules of blocked packages:

```
matched_key = None
if g.module:
    for key_in_globals in _unsafe_globals.keys():
        # Match when g.module is exactly the key, or a submodule of it
        # (the key followed by '.'); keep the most specific matching key.
        if g.module.startswith(key_in_globals):
            if (g.module == key_in_globals
                    or (len(g.module) > len(key_in_globals)
                        and g.module[len(key_in_globals)] == '.')):
                if matched_key is None or len(key_in_globals) > len(matched_key):
                    matched_key = key_in_globals
if matched_key:
    unsafe_filter = _unsafe_globals[matched_key]
```
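The root cause reduces to a dictionary lookup: exact-key matching lets any submodule of a blocked package fall through. A simplified sketch of both behaviors (the `_unsafe_globals` contents here are illustrative, not PickleScan's real table):

```python
# Illustrative subset of an unsafe-globals table, keyed by module name
_unsafe_globals = {"asyncio": "dangerous", "os": "dangerous"}

def naive_lookup(module: str):
    # picklescan <= 0.0.30: exact key match only
    return _unsafe_globals.get(module)

def prefix_lookup(module: str):
    # Patched approach: also match any parent package, preferring the most specific key
    best = None
    for key in _unsafe_globals:
        if module == key or module.startswith(key + "."):
            if best is None or len(key) > len(best):
                best = key
    return _unsafe_globals[best] if best is not None else None

print(naive_lookup("asyncio.unix_events"))   # None -> import slips through as merely Suspicious
print(prefix_lookup("asyncio.unix_events"))  # "dangerous" -> correctly flagged
```

Note that the prefix check requires the `.` boundary, so an unrelated module like `osfake` does not false-positive against the `os` entry.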

Exploitation Scenario

Adversary crafts a malicious PyTorch model that uses asyncio.unix_events._UnixSubprocessTransport (or another subclass of a dangerous asyncio module) to embed an arbitrary code execution payload. The model is packaged as a standard .pt ZIP archive and uploaded to HuggingFace or distributed via a compromised model registry. Victim organization's CI/CD pipeline downloads the model and runs picklescan <= 0.0.30 as a security gate — scan returns 'Suspicious' (not 'Dangerous'), which many pipelines treat as a warning rather than a block. Model proceeds to production. When a data scientist or serving infrastructure loads the model via torch.load(), the embedded pickle payload executes, achieving RCE under the context of the ML process — potentially with access to cloud credentials, training data, or lateral movement into the broader environment.
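Why does merely loading a model execute code? Pickle's `__reduce__` protocol lets an object specify a callable to invoke at load time, and torch.load()'s pickle layer gives an attacker exactly that hook. A harmless demonstration of the mechanism, with a benign callable standing in for a real payload:

```python
import pickle

class Payload:
    def __init__(self):
        pass

    def __reduce__(self):
        # At unpickling time, pickle calls print(...) with these arguments.
        # An attacker substitutes os.system or similar here.
        return (print, ("code executed during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # "loading data" alone invokes the callable
```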

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:L

Timeline

Published
September 10, 2025
Last Modified
September 18, 2025
First Seen
March 24, 2026

Related Vulnerabilities