picklescan — the de facto security scanner used to validate ML model artifacts in CI/CD pipelines and on platforms like Hugging Face — can be completely bypassed by embedding numpy.f2py eval() calls in pickle payloads. Any pipeline relying on picklescan < 0.0.33 as a security gate is providing false assurance: malicious models pass the scan and execute arbitrary OS commands on load. Patch to 0.0.33 immediately and treat picklescan as a failed single point of defense until you add layered controls.
Risk Assessment
HIGH. This is a security control bypass, not a vulnerability in a regular application — the defense itself is defeated. Exploitability is trivial: a working PoC is published in the advisory (7 lines of Python). Impact is unauthenticated RCE on any system that loads the crafted pickle — training servers, inference endpoints, MLOps workers. Blast radius is wide given picklescan's adoption across ML pipelines and model hubs. No active KEV listing, but the attack is straightforward enough that exploitation should be assumed following public disclosure.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| picklescan | pip | < 0.0.33 | 0.0.33 |
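The vulnerable range above can be enforced mechanically in CI. A minimal gate sketch, assuming picklescan's simple dotted-numeric version scheme (the `parse` and `picklescan_is_patched` helpers are illustrative, not part of any library):

```python
# Illustrative CI gate: fail the build if the installed picklescan predates
# the patched 0.0.33 release. Assumes simple dotted-numeric versions.
from importlib.metadata import PackageNotFoundError, version

PATCHED = (0, 0, 33)  # first fixed release per the advisory

def parse(v: str) -> tuple:
    """Turn a version string like '0.0.33' into (0, 0, 33) for tuple comparison."""
    return tuple(int(part) for part in v.split("."))

def picklescan_is_patched() -> bool:
    """True only if picklescan is installed at or above 0.0.33."""
    try:
        return parse(version("picklescan")) >= PATCHED
    except PackageNotFoundError:
        # Not installed at all: the gate has nothing to vouch for.
        return False
```

Wiring this into the pipeline (e.g., `sys.exit(0 if picklescan_is_patched() else 1)`) turns the table's vulnerable range into a hard build failure rather than a manual check.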
Do you use picklescan below 0.0.33? You're affected.
Recommended Action
Six steps:
1. PATCH: Upgrade picklescan to >= 0.0.33 immediately; this is the only remediation.
2. AUDIT: If picklescan was your sole pickle safety control, treat all externally sourced models loaded in the past 90 days as potentially compromised and investigate.
3. ELIMINATE PICKLE: Enforce the safetensors format for model weights where possible; it eliminates the pickle attack surface entirely.
4. SANDBOX: Load all third-party models in isolated containers with no network access, read-only filesystem mounts, and seccomp/AppArmor profiles.
5. LAYER DEFENSES: Do not rely on a single scanner; combine picklescan with static analysis, behavioral sandboxing, and cryptographic signature verification of trusted model artifacts.
6. DETECT: Alert on numpy.f2py imports and eval() calls in model-loading contexts; both are anomalous in normal inference and training workloads.
Frequently Asked Questions
What is GHSA-r8g5-cgf2-4m4m?
GHSA-r8g5-cgf2-4m4m is an advisory for picklescan, the de facto security scanner used to validate ML model artifacts in CI/CD pipelines and on platforms like Hugging Face. Versions below 0.0.33 can be completely bypassed by embedding numpy.f2py eval() calls in pickle payloads: malicious models pass the scan and execute arbitrary OS commands on load. Upgrade to 0.0.33 immediately and treat picklescan as a failed single point of defense until layered controls are in place.
Is GHSA-r8g5-cgf2-4m4m actively exploited?
No confirmed active exploitation of GHSA-r8g5-cgf2-4m4m has been reported, but the published 7-line PoC makes exploitation trivial; organizations should patch proactively and assume attempts will follow public disclosure.
How to fix GHSA-r8g5-cgf2-4m4m?
1. PATCH: Upgrade picklescan to >= 0.0.33 immediately; this is the only remediation.
2. AUDIT: If picklescan was your sole pickle safety control, treat all externally sourced models loaded in the past 90 days as potentially compromised and investigate.
3. ELIMINATE PICKLE: Enforce the safetensors format for model weights where possible; it eliminates the pickle attack surface entirely.
4. SANDBOX: Load all third-party models in isolated containers with no network access, read-only filesystem mounts, and seccomp/AppArmor profiles.
5. LAYER DEFENSES: Do not rely on a single scanner; combine picklescan with static analysis, behavioral sandboxing, and cryptographic signature verification of trusted model artifacts.
6. DETECT: Alert on numpy.f2py imports and eval() calls in model-loading contexts; both are anomalous in normal inference and training workloads.
What systems are affected by GHSA-r8g5-cgf2-4m4m?
This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps pipelines, model registries, CI/CD model validation gates.
What is the CVSS score for GHSA-r8g5-cgf2-4m4m?
No CVSS score has been assigned yet.
Technical Details
NVD Description
### Summary

An unsafe deserialization vulnerability allows an attacker to execute arbitrary code on the host when loading a malicious pickle payload from an untrusted source.

### Details

The `numpy.f2py.crackfortran` module exposes many functions that call `eval` on arbitrary strings of values. This is the case for `getlincoef` and `_eval_length`. This list is probably not exhaustive. According to https://numpy.org/doc/stable/reference/security.html#advice-for-using-numpy-on-untrusted-data, the whole of `numpy.f2py` should be considered unsafe when loading a pickle.

### PoC

```python
from numpy.f2py.crackfortran import getlincoef

class EvilClass:
    def __reduce__(self):
        payload = "__import__('os').system('echo \"successful attack\"')"
        return getlincoef, (payload, [])
```

### Impact

Who is impacted? Any organization or individual relying on `picklescan` to detect malicious pickle files from untrusted sources. What is the impact? Attackers can embed malicious code in a pickle file that remains undetected but executes when the pickle file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

### Note

The problem was originally reported to the joblib project, but this was deemed unrelated to joblib itself. However, I checked that `picklescan` was indeed vulnerable.
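To see why the PoC evades naive scanning, it helps to replay the `__reduce__` mechanism with a harmless stand-in for `getlincoef` (the names `benign_sink` and `Carrier` below are illustrative; nothing here touches numpy or executes the payload string):

```python
# Benign replay of the __reduce__ trick: any callable returned from
# __reduce__ is invoked at pickle.loads() time. The real PoC returns
# numpy.f2py.crackfortran.getlincoef, which eval()s its first argument;
# here we substitute a recorder so nothing dangerous runs.
import pickle

calls = []

def benign_sink(expr, trail):
    """Harmless stand-in for getlincoef: records its payload instead of eval()ing it."""
    calls.append(expr)

class Carrier:
    def __reduce__(self):
        # Shape mirrors the advisory PoC: (callable, (payload_string, []))
        return benign_sink, ("__import__('os').system('...')", [])

blob = pickle.dumps(Carrier())
pickle.loads(blob)  # benign_sink runs here, during deserialization
```

The point of the bypass is that the dangerous callable lives under `numpy.f2py`, a namespace scanners did not treat as unsafe, while the actual os.system call hides inside an innocuous-looking string argument.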
Exploitation Scenario
An attacker publishes a 'fine-tuned Mistral' model to a public model hub. The model's pickle file uses numpy.f2py.crackfortran.getlincoef as the deserialization hook, passing a malicious eval() payload that runs os.system() to download and execute a reverse shell. A victim organization's CI/CD pipeline runs picklescan before ingesting the model — the scan returns clean. When the model is loaded in the training cluster or inference server, the payload fires, establishing persistent access or exfiltrating API keys and model weights. The attack is particularly damaging because the organization's own security gate (picklescan) provided explicit false assurance, likely bypassing additional human review.
Related Vulnerabilities
| Advisory | CVSS | Description |
|---|---|---|
| GHSA-vvpj-8cmc-gx39 | 10.0 | picklescan: security flaw enables exploitation |
| GHSA-g38g-8gr9-h9xp | 9.8 | picklescan: Allowlist Bypass evades input filtering |
| GHSA-7wx9-6375-f5wh | 9.8 | picklescan: Allowlist Bypass evades input filtering |
| CVE-2025-1945 | 9.8 | picklescan: ZIP flag bypass enables RCE in PyTorch models |
| GHSA-hgrh-qx5j-jfwx | 8.8 | picklescan: Protection Bypass circumvents security controls |

All of the above affect the same package: picklescan.