CVE-2025-5173: label-studio-ml: PyTorch .pt deserialization RCE in YOLO loader
GHSA-55g9-6c2x-gf8q · HIGH
A malicious PyTorch model file (.pt) loaded by the YOLO integration in label-studio-ml-backend triggers arbitrary code execution via Python pickle deserialization — the classic unsafe `torch.load()` pattern. If your ML teams use Label Studio for data labeling with YOLO models sourced from any external or shared location, treat this as a supply chain risk and restrict model file provenance immediately. No patch is available; apply compensating controls now.
Risk Assessment
A CVSS score of 7.8 with a local attack vector reduces immediate internet-exposed risk, but in shared ML environments (multi-user Label Studio deployments, CI/CD pipelines, or Jupyter-adjacent workflows) the 'local' requirement is trivially met by any authenticated user or compromised upstream model source. The absence of a patch and the rolling-release model with no versioned fix compound the exposure. The EPSS score of 0.001 reflects current in-the-wild activity, not exploitation potential; PyTorch pickle exploits are well-documented and off-the-shelf tooling exists. Risk is elevated for organizations running data labeling pipelines with externally sourced models.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| label-studio-ml | pip | <= 1.0.9 | No patch |
| label-studio-ml-backend | GitHub (rolling release) | through commit 9fb7f4aa | No patch |
Recommended Action
1. IMMEDIATE: Audit all .pt model files loaded by label-studio-ml-backend — verify SHA checksums against known-good sources.
2. Restrict the `path` argument to an allowlist of trusted directories; block user-controllable file paths from reaching the `load()` function.
3. Switch to `torch.load(..., weights_only=True)` (PyTorch ≥ 1.13) to disable arbitrary code execution during deserialization — this is the correct long-term fix the vendor should implement.
4. Run the ML backend in an isolated container with no access to sensitive credentials or internal networks (defense in depth).
5. Monitor for unexpected process spawning from the label-studio-ml-backend process.
6. If using pip: pin label-studio-ml to a reviewed commit hash until a fix is released; watch GitHub issue #765 for patch status.
7. Do not load model files from untrusted public repositories (HuggingFace, public S3 buckets) without verification.
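The path-allowlist and `weights_only` mitigations can be combined in one defensive loader. This is a minimal sketch, not the project's code; `TRUSTED_MODEL_DIRS` and both function names are illustrative assumptions to adapt to your deployment.

```python
import os

# Illustrative allowlist of trusted model directories (assumption, not
# part of label-studio-ml-backend; adjust to your deployment).
TRUSTED_MODEL_DIRS = ("/opt/models/approved",)

def is_allowed_model_path(path: str) -> bool:
    """True only if path resolves inside an allowlisted directory.

    realpath() normalizes '..' segments and resolves symlinks, so
    traversal tricks like 'approved/../../etc/x' are rejected."""
    real = os.path.realpath(path)
    return any(
        real == d or real.startswith(d.rstrip("/") + "/")
        for d in TRUSTED_MODEL_DIRS
    )

def safe_load_model(path: str):
    """Load a .pt file only from an allowlisted path, and only as raw
    weights so pickled callables are rejected."""
    if not is_allowed_model_path(path):
        raise ValueError(f"model path outside allowlist: {path}")
    import torch  # deferred; weights_only requires PyTorch >= 1.13
    # weights_only=True restricts unpickling to tensors and plain
    # containers instead of arbitrary Python objects.
    return torch.load(path, map_location="cpu", weights_only=True)
```

Note that `weights_only=True` alone is not sufficient if the rest of the pipeline later unpickles auxiliary files; the allowlist limits who can place files where the loader will see them at all.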
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-5173?
A malicious PyTorch model file (.pt) loaded by the YOLO integration in label-studio-ml-backend triggers arbitrary code execution via Python pickle deserialization — the classic unsafe `torch.load()` pattern. If your ML teams use Label Studio for data labeling with YOLO models sourced from any external or shared location, treat this as a supply chain risk and restrict model file provenance immediately. No patch is available; apply compensating controls now.
Is CVE-2025-5173 actively exploited?
No confirmed active exploitation of CVE-2025-5173 has been reported. Because no patch is available, organizations should apply compensating controls (model provenance checks, path allowlisting, sandboxing) proactively rather than wait for a fix.
How to fix CVE-2025-5173?
1. IMMEDIATE: Audit all .pt model files loaded by label-studio-ml-backend — verify SHA checksums against known-good sources.
2. Restrict the `path` argument to an allowlist of trusted directories; block user-controllable file paths from reaching the `load()` function.
3. Switch to `torch.load(..., weights_only=True)` (PyTorch ≥ 1.13) to disable arbitrary code execution during deserialization — this is the correct long-term fix the vendor should implement.
4. Run the ML backend in an isolated container with no access to sensitive credentials or internal networks (defense in depth).
5. Monitor for unexpected process spawning from the label-studio-ml-backend process.
6. If using pip: pin label-studio-ml to a reviewed commit hash until a fix is released; watch GitHub issue #765 for patch status.
7. Do not load model files from untrusted public repositories (HuggingFace, public S3 buckets) without verification.
What systems are affected by CVE-2025-5173?
This vulnerability affects the following AI/ML architecture patterns: ML data labeling pipelines, Model serving, Training pipelines, MLOps/CI-CD pipelines.
What is the CVSS score for CVE-2025-5173?
CVE-2025-5173 has a CVSS v3.1 base score of 7.8 (HIGH). The EPSS exploitation probability is 0.10%.
Technical Details
NVD Description
A vulnerability has been found in HumanSignal label-studio-ml-backend up to 9fb7f4aa186612806af2becfb621f6ed8d9fdbaf and classified as problematic. Affected by this vulnerability is the function load of the file label-studio-ml-backend/label_studio_ml/examples/yolo/utils/neural_nets.py of the component PT File Handler. The manipulation of the argument path leads to deserialization. An attack has to be approached locally. This product takes the approach of rolling releases to provide continuous delivery. Therefore, version details for affected and updated releases are not available.
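The "deserialization" named above is Python pickle, which lets a serialized object nominate any callable to run at load time via `__reduce__`. A minimal, benign illustration of the mechanism (not code from the project; a real payload would nominate something like `os.system` instead of `sorted`):

```python
import pickle

class Payload:
    """Benign stand-in for a malicious object embedded in a .pt file.

    __reduce__ tells pickle which callable to invoke at load time and
    with which arguments; a real attack would return something like
    (os.system, ("<malicious command>",))."""
    def __reduce__(self):
        return (sorted, ("cab",))  # harmless attacker-chosen call

blob = pickle.dumps(Payload())  # the bytes an attacker ships in the model
result = pickle.loads(blob)     # merely loading the bytes runs the callable
```

Unpickling `blob` does not reconstruct a `Payload` at all: it executes the nominated callable and returns its result, which is why `torch.load()` on an untrusted file is equivalent to running attacker code.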
Exploitation Scenario
An attacker with access to the model file path (e.g., a malicious insider, a compromised model registry, or an adversary who has already gained write access to a shared filesystem) crafts a malicious PyTorch .pt file using Python's pickle module to embed a reverse shell payload. When a labeling engineer loads or restarts the YOLO ML backend with this model file, `torch.load()` deserializes the pickle object and executes the attacker's code with the backend process privileges. In a CI/CD context where models are pulled from a shared artifact store before training runs, this becomes a supply-chain vector that triggers automatically without user interaction beyond routine pipeline execution.
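When auditing candidate .pt files, the pickle stream can be inspected statically for the globals it would import, without ever unpickling it. A rough sketch using the standard library's `pickletools`; the allowlist of expected module prefixes is an assumption to tune per environment, and the `STACK_GLOBAL` string tracking is simplified:

```python
import pickletools

# Module prefixes a benign torch checkpoint typically references
# (assumption -- tune for your environment; deliberately excludes
# 'builtins', which would whitelist eval/exec).
EXPECTED_PREFIXES = ("torch.", "collections.", "numpy.")

def referenced_globals(pickle_bytes: bytes) -> list:
    """List the 'module.attr' globals a pickle stream would import on
    load, without actually unpickling anything."""
    globs, prev, last = [], None, None
    for op, arg, _pos in pickletools.genops(pickle_bytes):
        if op.name == "GLOBAL":           # protocol <= 3: arg is 'module attr'
            mod, attr = arg.split(" ", 1)
            globs.append(f"{mod}.{attr}")
        elif "UNICODE" in op.name:        # strings that may feed STACK_GLOBAL
            prev, last = last, arg
        elif op.name == "STACK_GLOBAL":   # protocol >= 4: takes last two strings
            globs.append(f"{prev}.{last}")
    return globs

def suspicious_globals(pickle_bytes: bytes) -> list:
    """Globals outside the expected allowlist, e.g. posix.system."""
    return [g for g in referenced_globals(pickle_bytes)
            if not g.startswith(EXPECTED_PREFIXES)]
```

A modern .pt file is a zip archive; the member ending in `data.pkl` holds the pickle stream to scan. Static scanning is a triage aid, not a guarantee — `weights_only=True` and provenance controls remain the primary defenses.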
Weaknesses (CWE)
CWE-502: Deserialization of Untrusted Data
CVSS Vector
`CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H`
References
- github.com/HumanSignal/label-studio-ml-backend/issues/765 (vendor issue)
- vuldb.com (third-party VDB entries)
- github.com/advisories/GHSA-55g9-6c2x-gf8q
- nvd.nist.gov/vuln/detail/CVE-2025-5173
Related Vulnerabilities
| CVE | CVSS | Summary | Relation |
|---|---|---|---|
| CVE-2025-25297 | 8.6 | Label Studio: SSRF via S3 endpoint exposes internal services | same package: label-studio |
| CVE-2022-36551 | 6.5 | Label Studio: SSRF + file read, self-reg bypass | same package: label-studio |
| CVE-2025-25296 | 6.1 | Label Studio: reflected XSS via label_config param | same package: label-studio |
| CVE-2025-47783 | — | Label Studio: XSS enables unauthorized actions via CSRF | same package: label-studio |
| CVE-2026-22033 | — | label-studio: XSS enables session hijacking | same package: label-studio |