GHSA-f4x7-rfwp-v3xw: picklescan: scanner bypass enables RCE via PyTorch function
GHSA-f4x7-rfwp-v3xw · Severity: MEDIUM

Organizations using picklescan ≤0.0.27 as a security gate for ML model files have a false sense of protection: attackers can craft malicious pickle files that pass validation and execute arbitrary code on load. Upgrade picklescan to 0.0.28 immediately and treat any model loaded after trusting a vulnerable picklescan scan as potentially compromised. Longer term, migrate ML serialization to safetensors or ONNX to eliminate the pickle-based attack surface entirely.
Risk Assessment
Effective risk is higher than the CVSS 'medium' label implies. This vulnerability does not affect a general application — it defeats a security control that organizations deliberately deployed to protect against pickle-based RCE in ML pipelines. Any team using picklescan as their primary or sole defense for model intake has zero protection after this bypass. The PoC is trivially reproducible, requiring only basic Python knowledge and a PyTorch import, meaning exploitation is accessible to low-skill attackers.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| picklescan | pip | <= 0.0.27 | 0.0.28 |
If you use picklescan ≤0.0.27, you are affected.
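A quick self-check for the vulnerable range can be scripted. The sketch below is our own helper (not part of picklescan) and assumes picklescan's plain `x.y.z` version scheme:

```python
from importlib import metadata  # stdlib; used for the optional live check below

PATCHED = (0, 0, 28)

def is_vulnerable(version: str) -> bool:
    # Naive numeric comparison; adequate for picklescan's x.y.z versions.
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts < PATCHED

# Against the installed package (raises PackageNotFoundError if absent):
# print(is_vulnerable(metadata.version("picklescan")))
```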
Recommended Action
1. PATCH: Upgrade picklescan to ≥0.0.28 immediately; this is the authoritative fix per the maintainer.
2. AUDIT: Review all pickle files loaded after trusting picklescan validation in the past 90 days, particularly externally sourced models.
3. CONTAINMENT: Load untrusted pickle files only in sandboxed environments (containers with no network egress, restricted syscalls via seccomp/AppArmor).
4. MIGRATE: Adopt safetensors (Hugging Face) or ONNX for model serialization; these formats avoid pickle-based deserialization entirely.
5. DEFENSE-IN-DEPTH: Layer with behavioral monitoring; alert on unexpected process spawns, outbound connections, or file writes during model load operations.
6. DETECT: Add YARA or static analysis rules that flag `evaluate_guards_expression` appearing inside pickle REDUCE opcodes.
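The detection idea in the last step can also be prototyped with the standard library instead of YARA. This is a hedged sketch, not a replacement for a real scanner; the function name is ours, and it simply flags any opcode argument that mentions the sink's name (which covers GLOBAL opcodes and the strings feeding STACK_GLOBAL):

```python
import pickletools

NEEDLE = "evaluate_guards_expression"

def flags_suspicious(data: bytes, needle: str = NEEDLE) -> bool:
    # Walk the pickle opcode stream without executing it and flag any
    # argument that mentions the dangerous sink's name.
    try:
        for opcode, arg, _pos in pickletools.genops(data):
            if arg is not None and needle in str(arg):
                return True
    except ValueError:
        return True  # unparseable pickle: treat as suspicious
    return False
```

Because `pickletools.genops` only disassembles the stream, the check is safe to run on untrusted input.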
Frequently Asked Questions
What is GHSA-f4x7-rfwp-v3xw?
GHSA-f4x7-rfwp-v3xw is a scanner-bypass vulnerability in picklescan ≤0.0.27: malicious pickle files that invoke `torch.fx.experimental.symbolic_shapes.ShapeEnv.evaluate_guards_expression` pass validation undetected and execute arbitrary code when loaded. The fix is to upgrade to picklescan 0.0.28.
Is GHSA-f4x7-rfwp-v3xw actively exploited?
No confirmed active exploitation of GHSA-f4x7-rfwp-v3xw has been reported, but organizations should still patch proactively.
How to fix GHSA-f4x7-rfwp-v3xw?
1. PATCH: Upgrade picklescan to ≥0.0.28 immediately; this is the authoritative fix per the maintainer.
2. AUDIT: Review all pickle files loaded after trusting picklescan validation in the past 90 days, particularly externally sourced models.
3. CONTAINMENT: Load untrusted pickle files only in sandboxed environments (containers with no network egress, restricted syscalls via seccomp/AppArmor).
4. MIGRATE: Adopt safetensors (Hugging Face) or ONNX for model serialization; these formats avoid pickle-based deserialization entirely.
5. DEFENSE-IN-DEPTH: Layer with behavioral monitoring; alert on unexpected process spawns, outbound connections, or file writes during model load operations.
6. DETECT: Add YARA or static analysis rules that flag `evaluate_guards_expression` appearing inside pickle REDUCE opcodes.
What systems are affected by GHSA-f4x7-rfwp-v3xw?
This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, ML model registries, model intake pipelines, MLOps CI/CD.
What is the CVSS score for GHSA-f4x7-rfwp-v3xw?
No CVSS score has been assigned yet.
Technical Details
NVD Description
Summary

The attack abuses `torch.fx.experimental.symbolic_shapes.ShapeEnv.evaluate_guards_expression`, a PyTorch library function, to execute code carried in a remote pickle file.

Details

The attack proceeds in two steps. First, the attacker crafts a payload whose `__reduce__` method returns a call to `torch.fx.experimental.symbolic_shapes.ShapeEnv.evaluate_guards_expression`. Then, when the victim checks the pickle file with the picklescan library, the scanner detects no dangerous functions; the victim proceeds to `pickle.load()` the malicious file, leading to remote code execution.

PoC

```python
import torch.fx.experimental.symbolic_shapes as symbolic_shapes

class EvilTorchFxSymbolicShapesEvaluateGuardsExpression:
    def __reduce__(self):
        fake_self = str  # stands in for the ShapeEnv instance argument
        code = "__import__('os').system('whoami')"
        args = []
        # On pickle.load(), the unpickler calls the sink with these arguments.
        return symbolic_shapes.ShapeEnv.evaluate_guards_expression, (fake_self, code, args)
```

Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models. What is the impact? Attackers can embed malicious code in a pickle file that remains undetected by the scanner but executes when the file is loaded. Supply chain attack: attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.

Credits

https://github.com/FredericDT
https://github.com/Qhaoduoyu
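The gadget pattern generalizes beyond this one function: any importable callable that evaluates its argument can serve as a pickle RCE sink via `__reduce__`. A minimal, harmless illustration using a local stand-in sink (our own `eval_sink`, not a PyTorch API, and a benign expression instead of a shell command):

```python
import pickle

def eval_sink(expr: str):
    # Stand-in for an eval-style library function such as
    # ShapeEnv.evaluate_guards_expression.
    return eval(expr)

class Gadget:
    def __reduce__(self):
        # pickle.load() will call eval_sink("1 + 1") during deserialization.
        return eval_sink, ("1 + 1",)

payload = pickle.dumps(Gadget())
result = pickle.loads(payload)  # evaluates the expression at load time
```

The scanner's job is to recognize which importable callables are such sinks; this bypass worked because `evaluate_guards_expression` was not on picklescan's denylist.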
Exploitation Scenario
Attacker publishes a malicious PyTorch model to Hugging Face Hub or a shared model registry, disguised as a legitimate fine-tuned checkpoint. The pickle file encodes a `__reduce__` method that calls `torch.fx.experimental.symbolic_shapes.ShapeEnv.evaluate_guards_expression` with a Python expression delivering a reverse shell or credential harvesting payload. An MLOps team runs picklescan ≤0.0.27 on the downloaded model as part of their model intake CI/CD gate — the scan returns clean. The model is approved and loaded into a training pipeline or inference server. Code executes in the ML workload context, which typically has access to cloud IAM credentials, training datasets in S3/GCS, and internal APIs — giving the attacker a high-value foothold.
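Beyond sandboxing, a load-time allowlist is an independent mitigation that blocks this gadget even when a scanner misses it. The sketch below uses the standard `pickle.Unpickler.find_class` hook; the allowlist contents are illustrative and would need to cover whatever globals your models legitimately reference:

```python
import io
import pickle

ALLOWED = {("collections", "OrderedDict")}  # illustrative allowlist

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Every global reference in the pickle stream passes through here.
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Unlike denylist scanning, this fails closed: a reference to `ShapeEnv.evaluate_guards_expression` (or any future gadget) is rejected at load time unless explicitly allowed.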
Related Vulnerabilities
All related advisories affect the same package, picklescan:

| Advisory | Severity | Title |
|---|---|---|
| GHSA-vvpj-8cmc-gx39 | 10.0 | picklescan: security flaw enables exploitation |
| GHSA-g38g-8gr9-h9xp | 9.8 | picklescan: Allowlist Bypass evades input filtering |
| GHSA-7wx9-6375-f5wh | 9.8 | picklescan: Allowlist Bypass evades input filtering |
| CVE-2025-1945 | 9.8 | picklescan: ZIP flag bypass enables RCE in PyTorch models |
| GHSA-hgrh-qx5j-jfwx | 8.8 | picklescan: Protection Bypass circumvents security controls |