Critical deserialization flaw in PyTorch allows arbitrary code execution even when using the supposedly safe weights_only=True flag — the exact control teams rely on to safely load untrusted models. Any ML pipeline using scio <= 1.0.0 or torch <= 2.5.1 is exposed, regardless of whether they followed previous PyTorch hardening guidance. Upgrade torch to >= 2.6 immediately and audit all torch.load call sites in your codebase.
Risk Assessment
High organizational risk for any team running ML inference or training pipelines with PyTorch. Severity is amplified because weights_only=True was the standard recommended mitigation for prior PyTorch deserialization issues — teams who followed best practices still have a false sense of security. Exploitation requires delivering a malicious model file to the target system, feasible via compromised model registries, supply chain attacks, or social engineering. No public exploit code confirmed, but the attack pattern is well-understood.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| scio-pypi | pip | <= 1.0.0 | No patch |
Using scio-pypi at version 1.0.0 or earlier? You're affected.
Recommended Action
Six steps:
1. Upgrade torch to >= 2.6 (primary fix — resolves the underlying deserialization bypass).
2. Upgrade scio to >= 1.0.1 when released.
3. Audit codebase: grep for torch.load, torch.jit.load, and pickle.load on model files.
4. Enforce model provenance: only load models from cryptographically signed, internally verified sources — treat external model files as untrusted input.
5. Isolate model loading in sandboxed environments (containers with seccomp/AppArmor) as defense-in-depth.
6. Monitor for anomalous process spawning or network connections during model load operations.
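The audit step (grep for model-loading call sites) can be sketched as a single recursive grep over the repository root. The `repo` directory and `serve.py` file below are placeholders created only so the demo is self-contained:

```shell
# Sketch of the call-site audit. "repo" stands in for your codebase root;
# the demo file exists only so the grep has something to match.
mkdir -p repo
printf 'import torch\nmodel = torch.load("weights.pt")\n' > repo/serve.py

# Flag every model-loading call site for manual review.
grep -rnE 'torch\.load|torch\.jit\.load|pickle\.load' --include='*.py' repo
```

In a real sweep, run the grep from the repository root and review each hit for whether the loaded file could ever originate from an untrusted source.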
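The provenance step can be approximated with a digest check against a pinned value before any load. `sha256_of` and `verify_model` are hypothetical helpers, and in practice the expected digest would come from a signed internal manifest rather than a hard-coded string:

```python
# Sketch: verify a model file's SHA-256 digest against an internally
# pinned value before loading it. Helper names are illustrative; the
# expected digest should come from a signed manifest in practice.
import hashlib

def sha256_of(path: str, chunk: int = 8192) -> str:
    """Stream the file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_model(path: str, expected_digest: str) -> str:
    """Raise if the file's digest differs from the pinned value."""
    actual = sha256_of(path)
    if actual != expected_digest:
        raise ValueError(f"model digest mismatch: {actual}")
    return path
```

A mismatch raises before the file ever reaches a deserializer, which keeps the untrusted bytes out of torch.load entirely.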
Frequently Asked Questions
What is GHSA-m9mp-6x32-5rhg?
Critical deserialization flaw in PyTorch allows arbitrary code execution even when using the supposedly safe weights_only=True flag — the exact control teams rely on to safely load untrusted models. Any ML pipeline using scio <= 1.0.0 or torch <= 2.5.1 is exposed, regardless of whether they followed previous PyTorch hardening guidance. Upgrade torch to >= 2.6 immediately and audit all torch.load call sites in your codebase.
Is GHSA-m9mp-6x32-5rhg actively exploited?
No confirmed active exploitation of GHSA-m9mp-6x32-5rhg has been reported, but organizations should still patch proactively.
How to fix GHSA-m9mp-6x32-5rhg?
1. Upgrade torch to >= 2.6 (primary fix — resolves the underlying deserialization bypass).
2. Upgrade scio to >= 1.0.1 when released.
3. Audit codebase: grep for torch.load, torch.jit.load, and pickle.load on model files.
4. Enforce model provenance: only load models from cryptographically signed, internally verified sources — treat external model files as untrusted input.
5. Isolate model loading in sandboxed environments (containers with seccomp/AppArmor) as defense-in-depth.
6. Monitor for anomalous process spawning or network connections during model load operations.
What systems are affected by GHSA-m9mp-6x32-5rhg?
This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps/CI-CD pipelines, model registries, scientific ML workloads.
What is the CVSS score for GHSA-m9mp-6x32-5rhg?
No CVSS score has been assigned yet.
Technical Details
NVD Description
### Impact
PyTorch reported a [**critical** vulnerability](https://github.com/pytorch/pytorch/security/advisories/GHSA-53q9-r3pm-6pq6) when using `torch.load`, even with option `weights_only=True`, for `torch <= 2.5.1`. In `scio <= 1.0.0`, the lower bound for `torch` is `2.3`.
### Patches
The lower bound was changed to `torch >= 2.6`, starting from `scio >= 1.0.1` (currently in dev state).
### Workarounds
You can manually check that you are using `torch >= 2.6`.
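The workaround above (manually checking that torch >= 2.6 is installed) could be wired into a pipeline as a simple version gate run before any torch.load call. `torch_is_patched` is a hypothetical helper; in a real pipeline you would pass it `torch.__version__`:

```python
# Minimal sketch of the advisory's workaround: proceed only if the
# installed torch is at or above the patched 2.6 release. Assumes a
# plain "major.minor[.patch]" release string (local build tags like
# "+cu121" are stripped); pre-release suffixes are not handled.
def torch_is_patched(version_string: str) -> bool:
    base = version_string.split("+")[0]                 # drop local build tag
    major, minor = (int(p) for p in base.split(".")[:2])
    return (major, minor) >= (2, 6)
```

For example, `torch_is_patched("2.5.1")` is False while `torch_is_patched("2.6.0+cu121")` is True; a loader could raise on a False result instead of deserializing.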
Exploitation Scenario
An adversary crafts a malicious .pt model file embedding a Python pickle payload that spawns a reverse shell or exfiltrates environment variables (API keys, cloud credentials). They publish it to a public model hub or compromise an internal model registry. When a data scientist or automated ML pipeline runs torch.load(malicious_model, weights_only=True) — believing the safety flag protects them — the deserialization bypass executes the payload with the process's full privileges, achieving RCE on the ML training or inference server.
Related Vulnerabilities
- CVE-2025-59528 (10.0) Flowise: Unauthenticated RCE via MCP config injection. Same attack type: Supply Chain.
- CVE-2024-2912 (10.0) BentoML: RCE via insecure deserialization (CVSS 10). Same attack type: Supply Chain.
- CVE-2023-3765 (10.0) MLflow: path traversal allows arbitrary file read. Same attack type: Supply Chain.
- CVE-2025-5120 (10.0) smolagents: sandbox escape enables unauthenticated RCE. Same attack type: Supply Chain.
- CVE-2026-21858 (10.0) n8n: Input Validation flaw enables exploitation. Same attack type: Code Execution.