GHSA-9gvj-pp9x-gcfr: picklescan: detection bypass allows malicious pickle exec

GHSA-9gvj-pp9x-gcfr HIGH
Published August 12, 2025
CISO Take

Picklescan and modelscan — the primary tools organizations use to gate malicious ML model ingestion — are trivially bypassable with a 5-line Python payload. Any model scanned before picklescan v0.0.27 that was approved as 'clean' should be treated as unverified. Upgrade immediately and re-scan your model inventory; Hugging Face's online scanner was also affected, meaning hub-sourced models are at risk.

Risk Assessment

HIGH. The exploit is trivial (public PoC exists, no ML expertise required), requires zero privileges, and completely neutralizes your primary pickle-based security control. The blast radius is significant: pickle deserialization is semantically equivalent to arbitrary code execution — there is no sandbox. Any organization relying on these scanners as a security gate is currently operating with a false sense of assurance.
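The "pickle equals code execution" claim can be demonstrated in a few lines: any object's `__reduce__` can smuggle an arbitrary callable into the byte stream, and `pickle.loads` will invoke it during deserialization. A minimal, benign sketch (using `operator.add` in place of a real payload such as `os.system`):

```python
import operator
import pickle

class Payload:
    def __reduce__(self):
        # Unpickling will CALL operator.add(2, 2); an attacker controls
        # both the callable and its arguments.
        return (operator.add, (2, 2))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)
print(result)  # 4 -- the call ran during deserialization, no Payload class needed
```

Note that the loading side never needs the `Payload` class: the pickle stream itself names the callable to invoke, which is why there is no sandbox to escape.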

Affected Systems

| Package    | Ecosystem | Vulnerable Range | Patched |
|------------|-----------|------------------|---------|
| picklescan | pip       | < 0.0.27         | 0.0.27  |

Do you use picklescan < 0.0.27? You're affected.

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Trivial

Recommended Action

6 steps
  1. Patch immediately: upgrade picklescan to >= 0.0.27.

  2. Re-scan your model inventory — invalidate all previous scan approvals from vulnerable versions.

  3. Never cache scan results; re-validate on each model load.

  4. Defense-in-depth: deploy model loading in sandboxed environments (containers with restricted syscalls, seccomp profiles) regardless of scan results.

  5. Prefer serialization-safe formats (safetensors, ONNX) over pickle-based formats wherever your stack allows.

  6. Detection: monitor your scanner pipelines for unexpected exceptions — this bypass surfaces as an exception in scanner logs before the false-clean is returned.
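Steps 3 and 6 can be combined into a fail-closed gate: any exception escaping the scanner is treated as a detection, never as a clean verdict. A minimal sketch with a pluggable scanner callable (the `scan(path) -> issue count` signature is a hypothetical shape for illustration; adapt it to your scanner's real API, e.g. picklescan's `scan_file_path`):

```python
def is_model_safe(path: str, scan) -> bool:
    """Fail-closed gate: approve only on an explicit zero-issue result.

    GHSA-9gvj-pp9x-gcfr manifested as an exception inside the scanner
    before a false 'clean' verdict, so a crash must mean REJECT.
    """
    try:
        issues = scan(path)
    except Exception:
        return False  # scanner crashed: treat as malicious and alert
    return issues == 0

# Illustration with stub scanners:
def crashy(path):
    raise ValueError("unexpected opcode")  # simulates the bypass symptom

clean = is_model_safe("model.pkl", lambda path: 0)
crashed = is_model_safe("model.pkl", crashy)
print(clean, crashed)  # True False
```

Because the result is computed per load rather than cached, a scanner upgrade immediately changes verdicts on previously "approved" files.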

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 9 - Risk Management System
ISO 42001
8.4 - AI system supply chain
NIST AI RMF
MANAGE 2.2 - Mechanisms are in place and applied to sustain the value of deployed AI systems and to prevent or mitigate risks
OWASP LLM Top 10
LLM05 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-9gvj-pp9x-gcfr?

Picklescan and modelscan — the primary tools organizations use to gate malicious ML model ingestion — are trivially bypassable with a 5-line Python payload. Any model scanned before picklescan v0.0.27 that was approved as 'clean' should be treated as unverified. Upgrade immediately and re-scan your model inventory; Hugging Face's online scanner was also affected, meaning hub-sourced models are at risk.

Is GHSA-9gvj-pp9x-gcfr actively exploited?

No confirmed active exploitation of GHSA-9gvj-pp9x-gcfr has been reported, but organizations should still patch proactively.

How to fix GHSA-9gvj-pp9x-gcfr?

1. Patch immediately: upgrade picklescan to >= 0.0.27.
2. Re-scan your model inventory — invalidate all previous scan approvals from vulnerable versions.
3. Never cache scan results; re-validate on each model load.
4. Defense-in-depth: deploy model loading in sandboxed environments (containers with restricted syscalls, seccomp profiles) regardless of scan results.
5. Prefer serialization-safe formats (safetensors, ONNX) over pickle-based formats wherever your stack allows.
6. Detection: monitor your scanner pipelines for unexpected exceptions — this bypass surfaces as an exception in scanner logs before the false-clean is returned.

What systems are affected by GHSA-9gvj-pp9x-gcfr?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps/model registries, model supply chain, CI/CD ML pipelines.

What is the CVSS score for GHSA-9gvj-pp9x-gcfr?

No CVSS score has been assigned yet.

Technical Details

NVD Description

### Details

There is a parsing logic error in picklescan and modelscan when handling the `STACK_GLOBAL` opcode. When `_list_globals` encounters `STACK_GLOBAL` at position `n`, it is expected to track the two preceding string arguments, but it scans the wrong range: the loop only considers positions `n-1` down to `1` and never inspects the opcode at position `0`. An attacker can therefore place an argument at position `0`, so the parser tracks only one argument. This triggers the exception at https://github.com/mmaitre314/picklescan/blob/2a8383cfeb4158567f9770d86597300c9e508d0f/src/picklescan/scanner.py#L281, causing a detection bypass: the malicious pickle file raises an unexpected exception inside the scanner instead of being flagged.

Example:

```
 0: S    STRING        'os'      --> arg 0: STRING (untracked due to wrong scanning range)
 6: S    STRING        'system'  --> arg 1: STRING (tracked)
16: \x93 STACK_GLOBAL
17: S    STRING        'ls'
23: \x85 TUPLE1
24: R    REDUCE
25: .    STOP
```

### PoC

```python
import pickle

payload = b"S'os'\nS'system'\n\x93S'ls'\n\x85R."
with open('bad_pickle.pkl', 'wb') as f:
    f.write(payload)
pickle.load(open('bad_pickle.pkl', 'rb'))
```

### Impact

Detection bypass in both picklescan and modelscan. It also affects the online Hugging Face pickle scanners, allowing the malicious pickle file to evade detection.

### Fix

Change `range(1, n)` to `range(1, n + 1)` so that `n - offset` covers the full range `0` to `n - 1`. https://github.com/mmaitre314/picklescan/blob/2a8383cfeb4158567f9770d86597300c9e508d0f/src/picklescan/scanner.py#L255
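The off-by-one can be illustrated with a simplified model of the backward scan. This is a sketch, not picklescan's actual code: the real `_list_globals` walks parsed opcode frames, but the range arithmetic is the same.

```python
# Simplified model: walk backward from STACK_GLOBAL at index n,
# collecting the two most recent STRING arguments via ops[n - offset].
# Buggy scan uses range(1, n); the fix uses range(1, n + 1).
def collect_args(ops, n, fixed=False):
    stop = n + 1 if fixed else n
    args = []
    for offset in range(1, stop):
        op, val = ops[n - offset]
        if op == "STRING":
            args.append(val)
        if len(args) == 2:
            break
    return list(reversed(args))

# Mirrors the advisory's example: 'os' at index 0, 'system' at index 1,
# STACK_GLOBAL at index 2.
ops = [("STRING", "os"), ("STRING", "system"), ("STACK_GLOBAL", None)]
print(collect_args(ops, 2))              # ['system'] -- index 0 never inspected
print(collect_args(ops, 2, fixed=True))  # ['os', 'system']
```

With only one tracked argument, the real scanner hits an unexpected-state exception and returns a false "clean" instead of resolving `os.system`.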

Exploitation Scenario

An adversary wants to distribute a backdoored model on Hugging Face. They craft a pickle payload placing the first string argument of a malicious `STACK_GLOBAL` import (e.g. the `'os'` in `os.system`, or a `subprocess.run` reference) at position 0 of the opcode stream, the position that picklescan's off-by-one bug never inspects. The scanner encounters an untracked argument, throws an internal exception, catches it silently, and returns 'no threats found'. The model passes Hugging Face's safety check and is published publicly. Any ML engineer who loads it via transformers.AutoModel.from_pretrained() or torch.load() executes arbitrary OS commands on their workstation, CI runner, or GPU training cluster.
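As a defense-in-depth complement to scanning, loads can be gated with an allow-list unpickler, the pattern from the standard-library `pickle` documentation. The `ALLOWED` set below is a hypothetical policy for illustration; real model files need a much larger whitelist, which is one reason safetensors is preferable.

```python
import io
import pickle

# Hypothetical allow-list policy: only these (module, name) globals may
# be resolved during unpickling; everything else is rejected.
ALLOWED = {("builtins", "list"), ("builtins", "dict")}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"global {module}.{name} is forbidden")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# A benign pickle round-trips; the advisory's PoC payload (which reaches
# for os.system via STACK_GLOBAL) raises UnpicklingError here before
# anything executes, regardless of what the scanner said.
print(restricted_loads(pickle.dumps([1, 2, 3])))  # [1, 2, 3]
```

Unlike a scanner, `find_class` runs inside the unpickling loop itself, so it cannot be bypassed by confusing a separate static parser.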

Timeline

Published
August 12, 2025
Last Modified
August 12, 2025
First Seen
March 24, 2026

Related Vulnerabilities