picklescan, the de facto ML model safety scanner, has a scanner bypass that allows malicious pickle files to pass as clean while executing arbitrary code on load. Any pipeline using picklescan < 0.0.33 as a security gate is providing a false sense of security, which is worse than no gate at all. Patch to v0.0.33 immediately and re-scan every model file previously cleared by older versions.
Risk Assessment
HIGH effective severity despite unassigned CVSS. The vulnerability does not just introduce a new attack path — it nullifies an existing compensating control that ML teams explicitly trust for model safety validation. Exploitability is moderate: requires numpy in the target environment (near-universal in ML stacks) and the ability to deliver a malicious file into the scan queue. Blast radius is significant: RCE on model serving or training infrastructure, not a sandbox escape — full host-level impact wherever pickle.load() executes.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| picklescan | pip | < 0.0.33 | 0.0.33 |
If you use any version of picklescan below 0.0.33 as a safety gate, you are affected.
Recommended Action
1. Patch: Update picklescan to >= 0.0.33 immediately across all environments.
2. Re-scan: Retroactively re-validate all model files previously cleared by older picklescan versions; treat prior results as untrusted.
3. Migrate format: Where possible, switch PyTorch model storage to safetensors, which eliminates the pickle deserialization attack surface entirely.
4. Defense-in-depth: Never rely on a single scanner as the sole control; add sandboxed model loading (isolated containers with no network access and restricted syscalls).
5. Detection: Alert on anomalous child process spawning from ML worker processes and on unusual network connections originating from model-loading jobs.
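The defense-in-depth recommendation above can be made concrete with a deny-by-default unpickler. Below is a minimal sketch using only the standard library, assuming your artifacts deserialize to plain containers (the `ALLOWED` set is illustrative and must be extended for your own types); unlike a blocklist, an allowlist cannot be bypassed by one more unlisted gadget.

```python
import io
import pickle

# Illustrative allowlist: only these (module, name) pairs may be imported
# during unpickling. Extend for the types your models actually need.
ALLOWED = {
    ("builtins", "list"),
    ("builtins", "dict"),
    ("builtins", "set"),
}

class AllowlistUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Deny-by-default: an unlisted gadget such as
        # numpy.f2py.crackfortran.param_eval is rejected here,
        # instead of slipping past a blocklist.
        if (module, name) not in ALLOWED:
            raise pickle.UnpicklingError(f"blocked global: {module}.{name}")
        return super().find_class(module, name)

def safe_loads(data: bytes):
    """Deserialize untrusted pickle bytes under the allowlist policy."""
    return AllowlistUnpickler(io.BytesIO(data)).load()
```

Plain containers load normally, while any pickle that imports a callable not on the list raises `UnpicklingError` before a single attacker-controlled function runs.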
Frequently Asked Questions
What is GHSA-cffc-mxrf-mhh4?
picklescan, the de facto ML model safety scanner, has a scanner bypass that allows malicious pickle files to pass as clean while executing arbitrary code on load. Any pipeline using picklescan < 0.0.33 as a security gate is providing a false sense of security, which is worse than no gate at all. Patch to v0.0.33 immediately and re-scan every model file previously cleared by older versions.
Is GHSA-cffc-mxrf-mhh4 actively exploited?
No confirmed active exploitation of GHSA-cffc-mxrf-mhh4 has been reported, but organizations should still patch proactively.
How to fix GHSA-cffc-mxrf-mhh4?
1. Patch: Update picklescan to >= 0.0.33 immediately across all environments. 2. Re-scan: Retroactively re-validate all model files previously cleared by older picklescan versions — treat prior results as untrusted. 3. Migrate format: Where possible, switch PyTorch model storage to safetensors — eliminates the pickle deserialization attack surface entirely. 4. Defense-in-depth: Never rely on a single scanner as the sole control; add sandboxed model loading (isolated containers with no network access and restricted syscalls). 5. Detection: Alert on anomalous child process spawning from ML worker processes and unusual network connections originating from model loading jobs.
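The re-scan step implies treating any verdict produced by a pre-0.0.33 scanner as void. If your pipeline records which scanner version produced each verdict (a hypothetical field; adapt to your own metadata schema), the gate is a simple version comparison:

```python
MIN_SAFE_PICKLESCAN = (0, 0, 33)  # first picklescan release with the fix

def verdict_still_valid(scanner_version: str) -> bool:
    """Return False for any verdict produced by picklescan < 0.0.33,
    forcing those artifacts back into the re-scan queue."""
    try:
        parts = tuple(int(p) for p in scanner_version.split("."))
    except ValueError:
        return False  # unparseable version string: distrust the verdict
    return parts >= MIN_SAFE_PICKLESCAN
```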
What systems are affected by GHSA-cffc-mxrf-mhh4?
This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps CI/CD pipelines, model registries, PyTorch model loading workflows.
What is the CVSS score for GHSA-cffc-mxrf-mhh4?
No CVSS score has been assigned yet.
Technical Details
NVD Description
### Summary
Picklescan fails to flag `numpy.f2py.crackfortran.param_eval`, a numpy function that evaluates attacker-controlled expressions, so pickle files that invoke it scan as clean.

### Details
The attack payload executes in the following steps:
- First, the attacker crafts the payload by invoking the `numpy.f2py.crackfortran.param_eval` function via the `__reduce__` method.
- Then, the victim checks the pickle file with the picklescan library, which detects no dangerous functions, and calls `pickle.load()` on the malicious file, leading to remote code execution.

### PoC
```python
class RCE:
    def __reduce__(self):
        from numpy.f2py.crackfortran import param_eval
        return (param_eval, ("os.system('ls')", None, None, None))
```

### Impact
Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models is affected. Attackers can embed malicious code in a pickle file that remains undetected but executes when the file is loaded, and can distribute infected pickle files as ML models, through APIs, or as saved Python objects.

### Reported by
Pinji Chen (cpj24@mails.tsinghua.edu.cn) from the NISL lab (https://netsec.ccert.edu.cn/about) at Tsinghua University, and Guanheng Liu (coolwind326@gmail.com).
Exploitation Scenario
An adversary targets an organization's ML model supply chain. They craft a malicious PyTorch .pkl file using numpy.f2py.crackfortran.param_eval as the __reduce__ callable — a function not on picklescan's blocklist. The file is uploaded to a public model registry (e.g., HuggingFace Hub) as a legitimate-looking fine-tuned model. The victim organization's automated pipeline downloads and scans it with picklescan — scan returns clean. Trusting the result, the pipeline calls pickle.load() and the payload executes: a reverse shell, credential harvester, or persistent backdoor planted in the training environment. From there, the adversary pivots to exfiltrate proprietary training data or poison downstream model artifacts.
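One practical aid for spotting the artifacts in this scenario: regardless of file extension, a file that begins with a binary pickle protocol marker or a ZIP signature (PyTorch `.pt`/`.pth` checkpoints are ZIP archives containing a pickle) deserves scrutiny before any load. A crude sniff, sketched with the standard library; note that protocol-0 pickles are plain ASCII and will not match, so this is a heuristic for triage, not a gate:

```python
def pickle_capable(path) -> bool:
    """Heuristic: does this file start like a binary pickle (protocol 2+
    begins with byte 0x80) or a ZIP container (b"PK"), as PyTorch
    checkpoints do?"""
    with open(path, "rb") as f:
        head = f.read(2)
    return head[:1] == b"\x80" or head[:2] == b"PK"
```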
Related Vulnerabilities
All in the same package, picklescan:

- GHSA-vvpj-8cmc-gx39 (CVSS 10.0): security flaw enables exploitation
- GHSA-g38g-8gr9-h9xp (CVSS 9.8): Allowlist Bypass evades input filtering
- GHSA-7wx9-6375-f5wh (CVSS 9.8): Allowlist Bypass evades input filtering
- CVE-2025-1945 (CVSS 9.8): ZIP flag bypass enables RCE in PyTorch models
- GHSA-hgrh-qx5j-jfwx (CVSS 8.8): Protection Bypass circumvents security controls