If your ML pipeline uses picklescan to gate model loading, that control is broken — attackers can craft pickle payloads that pass scanning clean and execute arbitrary code on load. Update picklescan to 0.0.29 immediately and treat any model scanned with a prior version as untrusted. Implement defense-in-depth: picklescan alone was never sufficient as a trust boundary for model files.
Risk Assessment
HIGH for organizations that rely on picklescan as their primary or sole defense before loading pickle-based ML models. The vulnerability is trivially exploitable post-disclosure — the PoC is public and requires no special ML knowledge. The blast radius extends beyond individual deployments: any CI/CD pipeline, model registry, or data science workflow that loads community or third-party models behind a picklescan pass-gate is exposed. The false sense of security introduced by a bypassed scanner is arguably more dangerous than having no scanner at all.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| picklescan | pip | < 0.0.29 | 0.0.29 |
Do you use picklescan? You're affected.
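The vulnerable range in the table can be gated mechanically before any model is loaded. A minimal sketch, assuming plain numeric dotted versions; the `is_patched` helper and pinned minimum are illustrative, not part of picklescan's API:

```python
MIN_PATCHED = (0, 0, 29)  # first safe release per this advisory

def is_patched(version: str) -> bool:
    """Compare a dotted version string numerically against 0.0.29."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    # Pad short versions like "0.1" so the tuple comparison is well-defined.
    parts += (0,) * (3 - len(parts))
    return parts >= MIN_PATCHED

print(is_patched("0.0.28"))  # False: inside the vulnerable range
print(is_patched("0.0.29"))  # True: patched
```

Tuple comparison avoids the classic string-comparison trap where "0.0.9" sorts above "0.0.29"; pre-release suffixes (e.g. "0.0.29rc1") would need extra handling.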
Recommended Action
1. PATCH: Upgrade picklescan to >= 0.0.29 immediately across all environments — dev, CI/CD, staging, production.
2. AUDIT: Review model loading logs for recently ingested pickle files that were scanned with prior versions; treat them as potentially compromised.
3. QUARANTINE: Do not load previously scanned models from untrusted or community sources until they have been rescanned with the patched version.
4. DEFENSE-IN-DEPTH: Do not rely solely on picklescan; layer it with (a) loading models in isolated containers or sandboxes, (b) preferring the safetensors format over pickle where possible, and (c) pinning model hashes and verifying provenance.
5. DETECT: Monitor for anomalous subprocess spawning or network calls originating from Python model-loading processes — RCE payloads typically execute shell commands or initiate outbound connections.
6. POLICY: Enforce that models loaded from public registries (Hugging Face, etc.) use the safetensors format for production workloads.
Frequently Asked Questions
What is GHSA-f54q-57x4-jg88?
If your ML pipeline uses picklescan to gate model loading, that control is broken — attackers can craft pickle payloads that pass scanning clean and execute arbitrary code on load. Update picklescan to 0.0.29 immediately and treat any model scanned with a prior version as untrusted. Implement defense-in-depth: picklescan alone was never sufficient as a trust boundary for model files.
Is GHSA-f54q-57x4-jg88 actively exploited?
No confirmed active exploitation of GHSA-f54q-57x4-jg88 has been reported, but organizations should still patch proactively.
How to fix GHSA-f54q-57x4-jg88?
1. PATCH: Upgrade picklescan to >= 0.0.29 immediately across all environments — dev, CI/CD, staging, production.
2. AUDIT: Review model loading logs for recently ingested pickle files that were scanned with prior versions; treat them as potentially compromised.
3. QUARANTINE: Do not load previously scanned models from untrusted or community sources until they have been rescanned with the patched version.
4. DEFENSE-IN-DEPTH: Do not rely solely on picklescan; layer it with (a) loading models in isolated containers or sandboxes, (b) preferring the safetensors format over pickle where possible, and (c) pinning model hashes and verifying provenance.
5. DETECT: Monitor for anomalous subprocess spawning or network calls originating from Python model-loading processes — RCE payloads typically execute shell commands or initiate outbound connections.
6. POLICY: Enforce that models loaded from public registries (Hugging Face, etc.) use the safetensors format for production workloads.
What systems are affected by GHSA-f54q-57x4-jg88?
This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, ML model registries, MLOps CI/CD pipelines, data science notebooks.
What is the CVSS score for GHSA-f54q-57x4-jg88?
No CVSS score has been assigned yet.
Technical Details
NVD Description
### Summary
The Python standard library function lib2to3.pgen2.grammar.Grammar.loads can be abused to execute a nested remote pickle payload that picklescan does not flag.

### Details
The attack executes in two steps. First, the attacker crafts a payload whose __reduce__ method returns a call to lib2to3.pgen2.grammar.Grammar.loads with an inner malicious pickle as its argument. Then the victim checks the file with picklescan, which detects no dangerous functions — only the benign-looking stdlib call is visible — and proceeds to pickle.load() the malicious file, leading to remote code execution.

### PoC

```python
import pickle

class Evil:
    def __reduce__(self):
        import os
        return (os.system, ('whoami',))

class EvilLib2to3Pgen2GrammarLoads:
    def __reduce__(self):
        from lib2to3.pgen2.grammar import Grammar
        payload = pickle.dumps(Evil())
        # payload = b'\x80\x04\x95!\x00\x00\x00\x00\x00\x00\x00\x8c\x05posix\x94\x8c\x06system\x94\x93\x94\x8c\x06whoami\x94\x85\x94R\x94.'
        return Grammar().loads, (payload,)
```

### Impact
Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

### Credits
https://github.com/FredericDT
https://github.com/Qhaoduoyu
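Why the bypass works can be seen by listing the globals a static scanner observes in each layer of the pickle stream. This sketch substitutes a hypothetical `benign_loads` function for Grammar.loads (so it also runs on Python versions where lib2to3 has been removed); the principle is identical: the dangerous global travels inside an opaque bytes argument that the outer scan never decodes. Nothing here is ever unpickled, so the payload never runs.

```python
import pickle
import pickletools

class Evil:
    def __reduce__(self):
        import os
        return (os.system, ("true",))

def benign_loads(data):
    # Stand-in for lib2to3.pgen2.grammar.Grammar.loads, which likewise
    # ends in an unconditional pickle.loads(data) on its argument.
    return pickle.loads(data)

class Wrapper:
    def __reduce__(self):
        # The dangerous inner pickle travels as an opaque bytes argument.
        return benign_loads, (pickle.dumps(Evil()),)

def visible_globals(data: bytes):
    """Names a static scanner can see in this pickle stream (outer layer only)."""
    names, strings = [], []
    for op, arg, _ in pickletools.genops(data):
        if "UNICODE" in op.name:
            strings.append(arg)
        if op.name == "GLOBAL":
            names.append(arg.replace(" ", "."))
        elif op.name == "STACK_GLOBAL":
            names.append(f"{strings[-2]}.{strings[-1]}")
    return names

print(visible_globals(pickle.dumps(Evil())))     # inner layer exposes ...system
print(visible_globals(pickle.dumps(Wrapper())))  # outer layer shows only benign_loads
```

The inner dump names `posix.system` (or `nt.system` on Windows) in its opcode stream; the outer dump names only the wrapper function, which is exactly what a one-layer scan inspects.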
Exploitation Scenario
An adversary targets an organization that uses picklescan to vet PyTorch models before deploying them to a model-serving endpoint. The attacker crafts a malicious .pt file where the serialized object's __reduce__ method invokes Grammar.loads wrapping an inner pickle payload that calls os.system or subprocess. The outer picklescan check sees only a call to a standard lib2to3 stdlib function and raises no alert. The attacker publishes this file to Hugging Face under a popular model namespace (typosquatting or compromising an existing account). A data scientist on the victim's team pulls the model, runs the organization's standard picklescan validation, sees a clean result, and loads it into their training pipeline. At pickle.load() time, the inner payload executes — delivering a reverse shell or establishing persistence on the ML training host, which typically has broad access to training data, cloud credentials, and production model registries.
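The DETECT guidance can also be enforced in-process: CPython's `sys.addaudithook` (3.8+) fires before events such as `os.system` execute, so a hook can block command execution triggered during deserialization. A minimal sketch, not production monitoring — the blocked-event list is an illustrative subset, and audit hooks cannot be removed for the life of the process:

```python
import pickle
import sys

class Evil:
    def __reduce__(self):
        import os
        return (os.system, ("whoami",))

BLOCKED_EVENTS = {"os.system", "subprocess.Popen"}  # illustrative subset

def guard(event, args):
    # Audit hooks run ahead of the action; raising here aborts it.
    if event in BLOCKED_EVENTS:
        raise RuntimeError(f"blocked audit event during model load: {event}")

sys.addaudithook(guard)

try:
    pickle.loads(pickle.dumps(Evil()))  # payload fires at load time
except RuntimeError as exc:
    print(exc)  # the shell command was stopped before executing
```

A stricter variant could also watch the `pickle.find_class` audit event to log every global a pickle resolves, catching payloads that avoid the blocked calls above.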
Related Vulnerabilities
All in the same package, picklescan:
- GHSA-vvpj-8cmc-gx39 (CVSS 10.0): security flaw enables exploitation
- GHSA-g38g-8gr9-h9xp (CVSS 9.8): Allowlist Bypass evades input filtering
- GHSA-7wx9-6375-f5wh (CVSS 9.8): Allowlist Bypass evades input filtering
- CVE-2025-1945 (CVSS 9.8): ZIP flag bypass enables RCE in PyTorch models
- GHSA-hgrh-qx5j-jfwx (CVSS 8.8): Protection Bypass circumvents security controls