GHSA-rrxm-2pvv-m66x (HIGH)
Published December 30, 2025
CISO Take

Picklescan—the de facto tool for vetting PyTorch and pickle model files—can be trivially bypassed using a public PoC, meaning any pipeline treating a 'clean' Picklescan result as a security gate is fully exposed to arbitrary code execution. An attacker embeds a numpy gadget in a model file; Picklescan reports it safe, but pickle.load() runs attacker-controlled OS commands. Upgrade to Picklescan ≥ 0.0.33 immediately and treat every externally sourced model loaded under prior versions as potentially compromised.

Affected Systems

| Package    | Ecosystem | Vulnerable Range | Patched |
|------------|-----------|------------------|---------|
| picklescan | pip       | < 0.0.33         | 0.0.33  |
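A minimal version gate can make the patched range above enforceable in CI. The helper below is a hypothetical sketch (not part of picklescan); it assumes plain numeric version strings:

```python
# Hypothetical CI gate: fail if an installed picklescan predates the fix.
PATCHED = (0, 0, 33)

def is_patched(version: str) -> bool:
    """Return True when a picklescan version string is >= 0.0.33.

    Assumes plain dotted-numeric versions (e.g. "0.0.33"); pre-release
    suffixes would need extra parsing.
    """
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts >= PATCHED
```

In practice this could be wired to `importlib.metadata.version("picklescan")` to check the running environment.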


Severity & Risk

CVSS 3.1: N/A
EPSS: N/A
KEV Status: Not in KEV
Sophistication: Trivial

Recommended Action

  1. PATCH NOW: upgrade picklescan to ≥ 0.0.33 across all environments.
  2. AUDIT: scan all requirements.txt, pyproject.toml, and Dockerfiles for pinned versions < 0.0.33.
  3. MIGRATE: adopt SafeTensors as the default serialization format for model artifacts; it is architecturally incapable of embedding executable code.
  4. SANDBOX: run model deserialization in isolated containers or VMs with no access to production secrets or credentials.
  5. DETECT (interim): alert on pickle files containing references to 'numpy.f2py.crackfortran' or 'getlincoef' as a signature for this specific gadget.
  6. ASSUME BREACH: treat any model loaded from an external source under a pre-patch Picklescan as potentially malicious until verified.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
  - Art. 9 - Risk Management System
  - Art. 15 - Accuracy, Robustness and Cybersecurity
  - Art. 17 - Quality Management System
ISO 42001
  - A.6.1.4 - Supply chain for AI systems
  - A.9.4 - Assessment of AI system impacts
  - A.9.5 - AI system security
  - A.10.3 - AI supply chain management
NIST AI RMF
  - GOVERN 1.6 - Organizational AI risk policies
  - GOVERN 6.1 - Policies and procedures for AI risk in the supply chain
  - MANAGE 2.2 - Mechanisms to respond to and recover from AI risks
OWASP LLM Top 10
  - LLM03:2025 - Supply Chain Vulnerabilities

Technical Details

NVD Description

### Summary

Picklescan fails to detect malicious pickles that use `numpy.f2py.crackfortran.getlincoef` (a NumPy F2PY helper) as a gadget to execute arbitrary Python code during unpickling.

### Details

A crafted object's `__reduce__` returns this function plus attacker-controlled arguments. The scan reports the file as safe, but `pickle.load()` triggers execution, allowing arbitrary command execution when the pickle is loaded.

### PoC

```python
class PoC:
    def __reduce__(self):
        from numpy.f2py.crackfortran import getlincoef
        return getlincoef, ("__import__('os').system('whoami')", None)
```

### Impact

- Arbitrary code execution on the victim machine once they load the "scanned as safe" pickle / model file.
- Affects any workflow relying on Picklescan to vet untrusted pickle / PyTorch artifacts.
- Enables supply-chain poisoning of shared model files.

### Credits

- [ac0d3r](https://github.com/ac0d3r)
- [Tong Liu](https://lyutoon.github.io), Institute of Information Engineering, CAS
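A missed gadget is equivalent to code execution because `pickle.load()` invokes whatever callable `__reduce__` returns. The mechanism can be demonstrated safely with a harmless builtin standing in for the gadget; no numpy is required:

```python
import pickle

class Gadget:
    """Demonstration only: `sorted` stands in for the getlincoef gadget.
    With the real gadget, the same mechanism runs an OS command instead."""
    def __reduce__(self):
        return sorted, ([3, 1, 2],)

payload = pickle.dumps(Gadget())  # serializes a *call*, not data
result = pickle.loads(payload)    # invokes sorted([3, 1, 2]) at load time
print(result)                     # [1, 2, 3]
```

No `Gadget` class needs to exist on the victim's machine: the deserialized object is simply the return value of the embedded call, which is why static scanners must treat every `__reduce__` callable as potential code execution.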

Exploitation Scenario

An adversary publishes a malicious PyTorch checkpoint to HuggingFace Hub or injects it via a vendor/partner model-sharing workflow. The victim's automated MLOps pipeline runs Picklescan on the artifact—the scan returns clean. The pipeline proceeds to load the model for evaluation or fine-tuning. At deserialization, the embedded gadget executes a reverse shell payload with the privileges of the ML training process. On a cloud GPU instance, the attacker harvests training data, model weights, S3 credentials, and cloud IAM tokens stored in environment variables—achieving full supply chain compromise with a single crafted file.

Timeline

Published
December 30, 2025
Last Modified
December 30, 2025
First Seen
March 24, 2026