If your ML pipelines use picklescan to gate model loading, that security control is bypassable — patch to 0.0.30 immediately. Any model file validated by a pre-patch picklescan should be re-scanned. Long-term, mandate safetensors format over pickle for all internal and third-party model artifacts.
Risk Assessment
Despite the medium CVSS rating, operational risk is HIGH for ML-heavy organizations. The exploit is trivial (single-class PoC), targets a security control specifically trusted to make pickle loading 'safe', and has direct supply chain implications. Organizations that rely on picklescan as their sole defense against malicious model files have a false sense of security — the scanner passes a file that executes arbitrary commands on load. Exposure is broad: any team using PyTorch, Hugging Face Hub, or internal model registries with picklescan in the validation gate.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| picklescan | pip | < 0.0.30 | 0.0.30 |
If any of your environments run picklescan below 0.0.30, you are affected.
Recommended Action
1. PATCH: Upgrade picklescan to >= 0.0.30 immediately across all environments.
2. RE-SCAN: Re-validate any model file previously cleared by a vulnerable version — treat prior scans as untrusted.
3. DETECT: Audit CI/CD pipelines and model registries for the picklescan version in use; add version enforcement to pipeline gates.
4. HARDEN: Adopt safetensors (.safetensors) as the organizational standard for model serialization — this eliminates the pickle attack surface entirely.
5. DEFENSE IN DEPTH: Never rely on a single scanner; add network egress monitoring on model-loading processes to catch unexpected outbound connections as a secondary indicator of compromise.
6. INVENTORY: Identify all production services calling pickle.load() and ensure they are gated behind patched validation.
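The version-enforcement gate in step 3 can be sketched in a few lines of Python. This is an illustrative helper, not a picklescan API: the function names are hypothetical, and it assumes plain numeric version strings (no pre-release suffixes). It fails closed if the scanner is missing entirely.

```python
from importlib.metadata import PackageNotFoundError, version

MIN_PATCHED = (0, 0, 30)  # first picklescan release with the fix

def parse_version(v: str) -> tuple:
    """Turn a simple version string like '0.0.30' into (0, 0, 30)."""
    return tuple(int(part) for part in v.split(".")[:3])

def picklescan_gate() -> bool:
    """Return True only if a patched picklescan is installed; fail closed."""
    try:
        installed = version("picklescan")
    except PackageNotFoundError:
        return False  # scanner absent entirely: do not pass the gate
    return parse_version(installed) >= MIN_PATCHED
```

A CI job would call `picklescan_gate()` before running any scan and fail the pipeline on `False`, so a stale scanner can never silently clear artifacts.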
Frequently Asked Questions
What is GHSA-xp4f-hrf8-rxw7?
GHSA-xp4f-hrf8-rxw7 is a detection bypass in picklescan: a pickle file that invokes ensurepip._run_pip via a __reduce__ method passes the scan as clean but executes arbitrary code when loaded with pickle.load(). If your ML pipelines use picklescan to gate model loading, patch to 0.0.30 immediately, re-scan any model file validated by a pre-patch version, and mandate the safetensors format over pickle for internal and third-party model artifacts.
Is GHSA-xp4f-hrf8-rxw7 actively exploited?
No confirmed active exploitation of GHSA-xp4f-hrf8-rxw7 has been reported, but organizations should still patch proactively.
How to fix GHSA-xp4f-hrf8-rxw7?
1. PATCH: Upgrade picklescan to >= 0.0.30 immediately across all environments.
2. RE-SCAN: Re-validate any model file previously cleared by a vulnerable version — treat prior scans as untrusted.
3. DETECT: Audit CI/CD pipelines and model registries for the picklescan version in use; add version enforcement to pipeline gates.
4. HARDEN: Adopt safetensors (.safetensors) as the organizational standard for model serialization — this eliminates the pickle attack surface entirely.
5. DEFENSE IN DEPTH: Never rely on a single scanner; add network egress monitoring on model-loading processes to catch unexpected outbound connections as a secondary indicator of compromise.
6. INVENTORY: Identify all production services calling pickle.load() and ensure they are gated behind patched validation.
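Gating pickle.load() call sites (step 6) can go beyond version checks. The standard library's own hardening pattern is to subclass pickle.Unpickler and override find_class with an explicit allowlist, so any unexpected global — including gadgets like ensurepip._run_pip — is refused outright. A minimal sketch; the allowlist contents below are illustrative only:

```python
import io
import pickle

# Illustrative allowlist: only globals named here may be resolved.
SAFE_GLOBALS = {("builtins", "set"), ("collections", "OrderedDict")}

class AllowlistUnpickler(pickle.Unpickler):
    """Refuse any global not explicitly allowlisted (fail closed)."""
    def find_class(self, module, name):
        if (module, name) not in SAFE_GLOBALS:
            raise pickle.UnpicklingError(
                f"blocked global during unpickling: {module}.{name}")
        return super().find_class(module, name)

def safe_loads(blob: bytes):
    """Drop-in replacement for pickle.loads at gated call sites."""
    return AllowlistUnpickler(io.BytesIO(blob)).load()
```

Plain data (dicts, lists, strings, numbers) deserializes normally because it references no globals; any pickle that tries to import a callable outside the allowlist raises before code can run. This complements, rather than replaces, a patched scanner.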
What systems are affected by GHSA-xp4f-hrf8-rxw7?
This vulnerability affects the following AI/ML architecture patterns: ML model serving pipelines, model registries, training pipelines, MLOps/CI-CD pipelines, agent frameworks loading external models.
What is the CVSS score for GHSA-xp4f-hrf8-rxw7?
The advisory carries a medium CVSS rating, but as noted in the Risk Assessment above, operational risk for ML-heavy organizations is HIGH because the bypass enables undetected remote code execution.
Technical Details
NVD Description
### Summary
ensurepip._run_pip, a function in Python's built-in ensurepip module, can be abused as a pickle reduction gadget to execute attacker-controlled code, and picklescan fails to flag it.

### Details
The attack proceeds in two steps. First, the attacker crafts the payload by calling the ensurepip._run_pip function in a __reduce__ method. Then the victim checks whether the pickle file is safe using the picklescan library; because the library does not detect any dangerous functions, the victim calls pickle.load() on the malicious pickle file, leading to remote code execution.

### PoC
```python
from ensurepip import _run_pip

class EvilEnsurepipRunpip:
    def __reduce__(self):
        payload = "[(__import__('os').system('whoami'),)]"
        return _run_pip, (payload,)
```

### Impact
Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models. What is the impact? Attackers can embed malicious code in a pickle file that remains undetected by the scanner but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

### Corresponding
https://github.com/FredericDT
https://github.com/Qhaoduoyu
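Picklescan works by statically walking the pickle opcode stream and flagging dangerous imports; the flaw here was that ensurepip._run_pip was missing from its denylist. The sketch below shows that style of static triage using only the standard library. The Gadget class is an inert stand-in with the same shape as the PoC above, substituting print for _run_pip so the snippet is safe to run:

```python
import pickle
import pickletools

class Gadget:
    """Same __reduce__ shape as the advisory's PoC, but inert."""
    def __reduce__(self):
        # A real exploit would return ensurepip._run_pip here.
        return print, ("payload would run here",)

# Protocol 2 records imported callables as GLOBAL opcodes.
blob = pickle.dumps(Gadget(), protocol=2)

# Static triage: walk the opcode stream without ever calling pickle.load().
# Each GLOBAL opcode names a "module qualname" pair that unpickling will
# import and invoke; a scanner checks these against a denylist or allowlist.
imported = [arg for opcode, arg, pos in pickletools.genops(blob)
            if opcode.name == "GLOBAL"]
print(imported)
```

Since this inspection never executes the payload, it is safe to run on untrusted files, but it is only as good as the list it checks against, which is exactly what this CVE exploited.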
Exploitation Scenario
An adversary targeting a financial institution's fraud-detection ML pipeline crafts a malicious PyTorch model using the ensurepip._run_pip reduce gadget. The payload establishes a reverse shell or exfiltrates environment variables containing API keys. The attacker publishes this model to a public Hugging Face repository or submits it via a vendor. The victim's MLOps pipeline runs picklescan (< 0.0.30) as the security gate — it returns clean. The pipeline promotes the model to staging, where pickle.load() triggers RCE. The attacker now has code execution inside the ML inference environment, which typically has broad internal network access and cloud IAM credentials attached.
Related Vulnerabilities
All of the following advisories affect the same package (picklescan):

| Advisory | CVSS | Summary |
|---|---|---|
| GHSA-vvpj-8cmc-gx39 | 10.0 | picklescan: security flaw enables exploitation |
| GHSA-g38g-8gr9-h9xp | 9.8 | picklescan: Allowlist Bypass evades input filtering |
| GHSA-7wx9-6375-f5wh | 9.8 | picklescan: Allowlist Bypass evades input filtering |
| CVE-2025-1945 | 9.8 | picklescan: ZIP flag bypass enables RCE in PyTorch models |
| GHSA-hgrh-qx5j-jfwx | 8.8 | picklescan: Protection Bypass circumvents security controls |