GHSA-rrxm-2pvv-m66x: picklescan: Code Injection enables RCE

GHSA-rrxm-2pvv-m66x HIGH
Published December 30, 2025
CISO Take

Picklescan—the de facto tool for vetting PyTorch and pickle model files—can be trivially bypassed using a public PoC, meaning any pipeline treating a 'clean' Picklescan result as a security gate is fully exposed to arbitrary code execution. An attacker embeds a numpy gadget in a model file; Picklescan reports it safe, but pickle.load() runs attacker-controlled OS commands. Upgrade to Picklescan ≥ 0.0.33 immediately and treat every externally sourced model loaded under prior versions as potentially compromised.

Risk Assessment

HIGH. The exploit is trivial—the PoC is a single Python class, publicly documented, requiring zero privileges and no special knowledge. Impact is full arbitrary code execution on the host deserializing the model, with access to all credentials, training data, and cloud resources in scope. Any organization using Picklescan as a model security gate in automated MLOps pipelines is directly exposed. Blast radius scales with pipeline automation: the less human review exists post-scan, the higher the risk.

Affected Systems

| Package    | Ecosystem | Vulnerable Range | Patched |
|------------|-----------|------------------|---------|
| picklescan | pip       | < 0.0.33         | 0.0.33  |


Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Trivial

Recommended Action

6 steps
  1. PATCH NOW: upgrade picklescan to ≥ 0.0.33 across all environments.
  2. AUDIT: scan all requirements.txt, pyproject.toml, and Dockerfiles for pinned versions < 0.0.33.
  3. MIGRATE: adopt SafeTensors as the default serialization format for model artifacts; it is architecturally incapable of embedding executable code.
  4. SANDBOX: run model deserialization in isolated containers or VMs with no access to production secrets or credentials.
  5. DETECT (interim): alert on pickle files containing references to 'numpy.f2py.crackfortran' or 'getlincoef' as a signature for this specific gadget.
  6. ASSUME BREACH: treat any model loaded from an external source under a pre-patch Picklescan as potentially malicious until verified.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 9 - Risk Management System
Art. 15 - Accuracy, Robustness and Cybersecurity
Art. 17 - Quality Management System
ISO 42001
A.6.1.4 - Supply chain for AI systems
A.9.4 - Assessment of AI system impacts
A.9.5 - AI system security
A.10.3 - AI supply chain management
NIST AI RMF
GOVERN 1.6 - Organizational AI risk policies
GOVERN 6.1 - Policies and procedures for AI risk in the supply chain
MANAGE 2.2 - Mechanisms to respond to and recover from AI risks
OWASP LLM Top 10
LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-rrxm-2pvv-m66x?

Picklescan—the de facto tool for vetting PyTorch and pickle model files—can be trivially bypassed using a public PoC, meaning any pipeline treating a 'clean' Picklescan result as a security gate is fully exposed to arbitrary code execution. An attacker embeds a numpy gadget in a model file; Picklescan reports it safe, but pickle.load() runs attacker-controlled OS commands. Upgrade to Picklescan ≥ 0.0.33 immediately and treat every externally sourced model loaded under prior versions as potentially compromised.

Is GHSA-rrxm-2pvv-m66x actively exploited?

No confirmed active exploitation of GHSA-rrxm-2pvv-m66x has been reported, but organizations should still patch proactively.

How to fix GHSA-rrxm-2pvv-m66x?

  1. PATCH NOW: upgrade picklescan to ≥ 0.0.33 across all environments.
  2. AUDIT: scan all requirements.txt, pyproject.toml, and Dockerfiles for pinned versions < 0.0.33.
  3. MIGRATE: adopt SafeTensors as the default serialization format for model artifacts; it is architecturally incapable of embedding executable code.
  4. SANDBOX: run model deserialization in isolated containers or VMs with no access to production secrets or credentials.
  5. DETECT (interim): alert on pickle files containing references to 'numpy.f2py.crackfortran' or 'getlincoef' as a signature for this specific gadget.
  6. ASSUME BREACH: treat any model loaded from an external source under a pre-patch Picklescan as potentially malicious until verified.

What systems are affected by GHSA-rrxm-2pvv-m66x?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, model registries, MLOps CI/CD pipelines, automated model evaluation pipelines.

What is the CVSS score for GHSA-rrxm-2pvv-m66x?

No CVSS score has been assigned yet.

Technical Details

NVD Description

### Summary

A malicious pickle can use the `numpy.f2py.crackfortran.getlincoef` function (a NumPy F2PY helper) to execute arbitrary Python code during unpickling, and Picklescan fails to flag it.

### Details

Picklescan fails to detect a malicious pickle that uses the gadget `numpy.f2py.crackfortran.getlincoef` in `__reduce__`, allowing arbitrary command execution when the pickle is loaded. A crafted object returns this function plus attacker-controlled arguments; the scan reports the file as safe, but `pickle.load()` triggers execution.

### PoC

```python
class PoC:
    def __reduce__(self):
        from numpy.f2py.crackfortran import getlincoef
        return getlincoef, ("__import__('os').system('whoami')", None)
```

### Impact

- Arbitrary code execution on the victim machine once they load the "scanned as safe" pickle / model file.
- Affects any workflow relying on Picklescan to vet untrusted pickle / PyTorch artifacts.
- Enables supply-chain poisoning of shared model files.

### Credits

- [ac0d3r](https://github.com/ac0d3r)
- [Tong Liu](https://lyutoon.github.io), Institute of Information Engineering, CAS
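The mechanism behind the PoC can be reproduced harmlessly without NumPy: whatever callable `__reduce__` returns is invoked with its arguments at load time. Here `eval` with a benign expression stands in for the `os.system` payload:

```python
import io
import pickle

class Demo:
    def __reduce__(self):
        # eval is a harmless stand-in for the advisory's os.system payload:
        # the returned callable and arguments are invoked by pickle.load().
        return (eval, ("1 + 1",))

payload = pickle.dumps(Demo())
# No Demo instance comes back -- the unpickler calls eval("1 + 1") instead.
result = pickle.load(io.BytesIO(payload))
print(result)  # 2
```

This is by-design pickle behavior, not a bug in pickle itself, which is why scanners like Picklescan exist and why a bypassed scanner is equivalent to no scanner at all.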

Exploitation Scenario

An adversary publishes a malicious PyTorch checkpoint to HuggingFace Hub or injects it via a vendor/partner model-sharing workflow. The victim's automated MLOps pipeline runs Picklescan on the artifact—the scan returns clean. The pipeline proceeds to load the model for evaluation or fine-tuning. At deserialization, the embedded gadget executes a reverse shell payload with the privileges of the ML training process. On a cloud GPU instance, the attacker harvests training data, model weights, S3 credentials, and cloud IAM tokens stored in environment variables—achieving full supply chain compromise with a single crafted file.
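One defense-in-depth layer for the scenario above can be sketched with the allowlisting `Unpickler` pattern from the Python `pickle` documentation: override `find_class` so only explicitly approved globals resolve, and anything else (including this advisory's numpy gadget) is rejected before it can execute. This complements, rather than replaces, process-level isolation:

```python
import builtins
import io
import pickle

# Allowlist: the only builtins a pickle may reference in this sketch.
SAFE_BUILTINS = {"range", "complex", "set", "frozenset", "slice"}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if module == "builtins" and name in SAFE_BUILTINS:
            return getattr(builtins, name)
        # Any other global -- os.system, numpy.f2py.crackfortran.getlincoef,
        # framework internals -- is refused before it can be called.
        raise pickle.UnpicklingError(
            f"global '{module}.{name}' is forbidden")

def restricted_loads(data: bytes):
    """Like pickle.loads(), but with an allowlist on importable globals."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Plain containers still round-trip, while a payload such as `b"cos\nsystem\n."` raises `UnpicklingError` instead of importing `os`. Real PyTorch checkpoints reference torch globals, so an allowlist for them must be built deliberately; `torch.load(..., weights_only=True)` applies the same idea.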

Timeline

Published
December 30, 2025
Last Modified
December 30, 2025
First Seen
March 24, 2026

Related Vulnerabilities