GHSA-9m3x-qqw2-h32h: picklescan: Deserialization enables RCE

GHSA-9m3x-qqw2-h32h HIGH
Published February 2, 2026
CISO Take

picklescan — a tool widely deployed to gate ML model loading from untrusted sources — can be bypassed with a public one-liner that uses getattr obfuscation to hide eval calls. Any MLOps pipeline or model-serving infrastructure that relies on picklescan as a security control is effectively unprotected against malicious pickle artifacts. Upgrade to picklescan >= 1.0.1 immediately and treat all pickle files previously cleared by older versions from external sources as potentially compromised.

Risk Assessment

HIGH. This vulnerability defeats a security control rather than exploiting a system directly, creating a false sense of protection in any organization using picklescan to gate model loading. Exploitability is trivial — the PoC is public and requires zero ML expertise. Impact is severe: successful exploitation achieves arbitrary code execution in whatever context loads the pickle file (model servers, training workers, inference APIs). Exposure is broad given pickle's pervasive use for ML serialization across sklearn, PyTorch legacy, XGBoost, and dozens of other frameworks. No authentication or privileges required.

Affected Systems

Package: picklescan
Ecosystem: pip
Vulnerable Range: < 1.0.1
Patched: 1.0.1

If any of your systems run picklescan < 1.0.1, they are affected.

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Trivial

Recommended Action

6 steps
  1. PATCH

    Upgrade picklescan to >= 1.0.1 immediately (pip install --upgrade picklescan).

  2. AUDIT

    Re-scan all pickle files previously approved by picklescan < 1.0.1 from external sources — treat them as untrusted until re-verified.

  3. SAFE FORMATS

    Prefer safetensors, ONNX, or JSON over pickle for model serialization wherever possible — eliminate the attack surface entirely.

  4. SANDBOXING

    Load any pickle files in isolated containers with no network egress, minimal filesystem access, and resource limits, even after scanning.

  5. DETECTION

    Monitor for unexpected child process spawning (os.system, subprocess) from Python processes involved in model loading.

  6. DEFENSE IN DEPTH

    Do not rely solely on any single scanner; combine hash verification, code signing, and allowlisting for model artifacts from external registries.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 9 - Risk management system
ISO 42001
A.6.2.6 - AI system supply chain
NIST AI RMF
MANAGE 2.2 - Mechanisms to sustain the value of deployed AI systems are evaluated and applied
OWASP LLM Top 10
LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-9m3x-qqw2-h32h?

picklescan — a tool widely deployed to gate ML model loading from untrusted sources — can be bypassed with a public one-liner that uses getattr obfuscation to hide eval calls. Any MLOps pipeline or model-serving infrastructure that relies on picklescan as a security control is effectively unprotected against malicious pickle artifacts. Upgrade to picklescan >= 1.0.1 immediately and treat all pickle files previously cleared by older versions from external sources as potentially compromised.

Is GHSA-9m3x-qqw2-h32h actively exploited?

No confirmed active exploitation of GHSA-9m3x-qqw2-h32h has been reported, but organizations should still patch proactively.

How to fix GHSA-9m3x-qqw2-h32h?

1. PATCH: Upgrade picklescan to >= 1.0.1 immediately (pip install --upgrade picklescan). 2. AUDIT: Re-scan all pickle files previously approved by picklescan < 1.0.1 from external sources — treat them as untrusted until re-verified. 3. SAFE FORMATS: Prefer safetensors, ONNX, or JSON over pickle for model serialization wherever possible — eliminate the attack surface entirely. 4. SANDBOXING: Load any pickle files in isolated containers with no network egress, minimal filesystem access, and resource limits, even after scanning. 5. DETECTION: Monitor for unexpected child process spawning (os.system, subprocess) from Python processes involved in model loading. 6. DEFENSE IN DEPTH: Do not rely solely on any single scanner; combine hash verification, code signing, and allowlisting for model artifacts from external registries.
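The "safe formats" recommendation amounts to persisting parameters as inert data rather than executable object graphs. A hypothetical minimal sketch with plain JSON (safetensors plays the same role for tensor weights):

```python
import json

# Hypothetical example: model parameters stored as data, not code.
weights = {"coef": [0.12, -0.7, 1.5], "intercept": 0.03}

serialized = json.dumps(weights)   # only data crosses the trust boundary
restored = json.loads(serialized)  # loading cannot trigger code execution
```

Unlike `pickle.loads`, `json.loads` has no mechanism to import or call anything, so the entire class of `__reduce__`-based payloads is structurally impossible.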

What systems are affected by GHSA-9m3x-qqw2-h32h?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps pipelines, model registries, CI/CD model evaluation.

What is the CVSS score for GHSA-9m3x-qqw2-h32h?

No CVSS score has been assigned yet.

Technical Details

NVD Description

### Summary

An unsafe deserialization vulnerability allows any unauthenticated user to execute arbitrary code on the host loading a pickle payload from an untrusted source.

### Details

It's possible to hide the `eval` call nested under another callable via `getattr`.

### PoC

```python
import builtins


class EvilClass:
    @staticmethod
    def _obfuscated_eval(payload):
        getattr(builtins, "eval")(payload)

    def __reduce__(self):
        payload = "__import__('os').system('echo \"successful attack\"')"
        return self._obfuscated_eval, (payload,)
```

### Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files from untrusted sources.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected by the scanner but executes when the file is loaded.

Supply chain attack: attackers can distribute infected pickle files to systems that load serialized ML models, APIs, or saved Python objects from untrusted sources.
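Note that even the obfuscated `__reduce__` callable above must be resolved by name when the pickle is loaded. The standard-library hardening pattern from the Python documentation — overriding `Unpickler.find_class` — therefore blocks this payload class regardless of what the wrapper is called. A minimal sketch that refuses every global lookup (pure-data pickles still load):

```python
import io
import pickle


class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to resolve any global: data-only pickles load, callables do not."""

    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            f"pickle tried to import {module}.{name}; refusing to load"
        )


def restricted_loads(data: bytes):
    """Deserialize data while rejecting any pickle that imports code."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

This is a complement to, not a substitute for, the sandboxing guidance above: many legitimate ML pickles do reference classes, so a production variant would allowlist specific `(module, name)` pairs inside `find_class` instead of rejecting everything.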

Exploitation Scenario

An adversary targets an MLOps platform that automatically pulls and benchmarks community-submitted models. They craft a malicious pickle file embedding a reverse shell via getattr obfuscation — the file passes picklescan inspection cleanly on pre-patch versions. The file is published to a public model hub or submitted as a model artifact via a pull request. When the automated evaluation pipeline loads the model for scoring, the payload executes with the privileges of the inference worker. The adversary gains shell access to the training infrastructure, from which they can exfiltrate training data, poison the model registry, harvest cloud credentials from environment variables, or persist access. No human interaction is required after initial artifact submission.
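The child-process behaviour in this scenario can also be surfaced from inside the interpreter with CPython's audit hooks (PEP 578). The sketch below uses documented CPython audit event names; in production you would pair this with OS-level telemetry (auditd, eBPF), since an attacker who already runs code in-process could install their payload before your hook.

```python
import os
import sys

# Documented CPython audit events that indicate a child process being spawned.
SPAWN_EVENTS = {"os.system", "subprocess.Popen", "os.exec", "os.posix_spawn"}
observed = []


def spawn_monitor(event, args):
    # A real deployment would forward hits to logging/alerting, not a list.
    if event in SPAWN_EVENTS:
        observed.append((event, args))


sys.addaudithook(spawn_monitor)  # note: audit hooks cannot be removed once added

os.system("true")  # stand-in for a payload spawning a child process
```

After the `os.system` call, `observed` contains an `"os.system"` entry — exactly the signal step 5 of the recommended actions asks you to monitor for in model-loading workers.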

Timeline

Published
February 2, 2026
Last Modified
February 2, 2026
First Seen
March 24, 2026

Related Vulnerabilities