CVE-2025-54413: skops: RCE via MethodNode unsafe deserialization

GHSA-4v6w-xpmh-gfgp HIGH PoC AVAILABLE CISA: ATTEND
Published July 26, 2025
CISO Take

Any ML pipeline loading skops model files from external or shared sources is exposed to arbitrary code execution at load time. Upgrade to skops 0.12.0 immediately and audit all locations where `.skops` files are ingested. This is a supply chain vector — a malicious model file on HuggingFace or an internal registry is sufficient to compromise the loading environment.

Risk Assessment

High risk for organizations running scikit-learn-based ML pipelines. The vulnerability requires an attacker to deliver a crafted skops model file to a victim who loads it — achievable via supply chain (HuggingFace Hub, S3 buckets, artifact registries) or social engineering. EPSS is currently low, reflecting recency, not real-world risk. The fix is available and the patch delta is concrete, making exploitability moderate-to-low only because the attack surface is limited to skops users. However, for affected environments, the impact is full code execution with the privileges of the loading process.

Affected Systems

| Package | Ecosystem | Vulnerable Range | Patched |
|---------|-----------|------------------|---------|
| skops   | pip       | < 0.12.0         | 0.12.0  |

668 dependents · 100% patched · ~26 days to patch

Do you use skops? You're affected.
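A minimal way to confirm exposure is to compare the installed version against the patched release. This is a pure-stdlib sketch; the `is_vulnerable` helper is illustrative and not part of skops:

```python
# Check whether the installed skops predates the patched 0.12.0 release.
from importlib import metadata


def is_vulnerable(version: str, patched: str = "0.12.0") -> bool:
    """Naive numeric compare, sufficient for skops' X.Y.Z version scheme."""
    to_tuple = lambda v: tuple(int(p) for p in v.split(".")[:3])
    return to_tuple(version) < to_tuple(patched)


if __name__ == "__main__":
    try:
        installed = metadata.version("skops")
    except metadata.PackageNotFoundError:
        print("skops is not installed")
    else:
        status = "VULNERABLE" if is_vulnerable(installed) else "patched"
        print(f"skops {installed}: {status}")
```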

Severity & Risk

CVSS 3.1: N/A
EPSS: 0.0% chance of exploitation in 30 days (higher than 6% of all CVEs)
Exploitation Status: Exploit Available
Exploitation: MEDIUM
Sophistication: Moderate
Exploitation Confidence: medium
CISA SSVC: Public PoC (indexed via trickest/cve)

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Recommended Action

6 steps
  1. Patch immediately

    Upgrade skops to 0.12.0 (`pip install --upgrade skops`).

  2. Audit ingestion points

    Identify all locations in your pipelines where .skops files are loaded — CI/CD, inference servers, training workers, notebooks.

  3. Verify model provenance

    Implement cryptographic signing or hash verification for model artifacts before loading.

  4. Restrict load sources

    Only allow skops files from internal, controlled registries — block loading from arbitrary URLs or unauthenticated paths.

  5. Sandbox model loading

    Consider loading untrusted models in isolated environments (containers, VMs) with no network access and minimal privileges.

  6. Detection

    Alert on `skops.io.load` calls in production environments processing externally sourced files; monitor for unexpected process spawning from ML inference workers.
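Step 3 above (verify model provenance) can be sketched as a pinned SHA-256 digest checked before the artifact ever reaches the deserializer. The digest, path, and `verify_then_load` helper here are illustrative placeholders, not a prescribed implementation:

```python
# Provenance check: compare a .skops artifact's SHA-256 digest against a
# value pinned out-of-band (e.g. in your model registry) before loading.
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large model artifacts stay out of memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_then_load(path: Path, expected_sha256: str):
    """Refuse to deserialize unless the artifact matches the pinned digest."""
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise RuntimeError(
            f"Refusing to load {path}: digest {actual} != pinned {expected_sha256}"
        )
    # Only now hand the file to skops; the import is deferred so the
    # integrity check itself has no skops dependency.
    from skops.io import load
    return load(path)
```

Hash verification does not make a malicious file safe; it only guarantees you are loading the exact artifact you reviewed and pinned.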

CISA SSVC Assessment

Decision: Attend
Exploitation: poc
Automatable: No
Technical Impact: total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.2.6 - AI system security in the supply chain
NIST AI RMF
MANAGE 2.2 - Mechanisms to maintain AI system integrity and security
MAP 5.1 - Likelihood of AI risks to individuals or groups
OWASP LLM Top 10
LLM05 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2025-54413?

CVE-2025-54413 is an unsafe-deserialization vulnerability in skops, a Python library for sharing scikit-learn based models. In versions below 0.12.0, an inconsistency in MethodNode lets a crafted `.skops` file access unexpected object fields through dot notation, leading to arbitrary code execution at load time. Any ML pipeline that loads skops model files from external or shared sources is exposed; a malicious model file on HuggingFace or an internal registry is sufficient to compromise the loading environment.

Is CVE-2025-54413 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2025-54413, increasing the risk of exploitation.

How to fix CVE-2025-54413?

1. **Patch immediately**: Upgrade skops to 0.12.0 (`pip install --upgrade skops`).
2. **Audit ingestion points**: Identify all locations in your pipelines where `.skops` files are loaded — CI/CD, inference servers, training workers, notebooks.
3. **Verify model provenance**: Implement cryptographic signing or hash verification for model artifacts before loading.
4. **Restrict load sources**: Only allow skops files from internal, controlled registries — block loading from arbitrary URLs or unauthenticated paths.
5. **Sandbox model loading**: Consider loading untrusted models in isolated environments (containers, VMs) with no network access and minimal privileges.
6. **Detection**: Alert on `skops.io.load` calls in production environments processing externally sourced files; monitor for unexpected process spawning from ML inference workers.

What systems are affected by CVE-2025-54413?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps/CI-CD pipelines, model registries, data science notebooks.

What is the CVSS score for CVE-2025-54413?

No CVSS score has been assigned yet.

Technical Details

NVD Description

skops is a Python library which helps users share and ship their scikit-learn based models. Versions 0.11.0 and below contain an inconsistency in MethodNode, which can be exploited to access unexpected object fields through dot notation. This can be used to achieve arbitrary code execution at load time. While this issue may seem similar to GHSA-m7f4-hrc6-fwg3, it is actually more severe, as it relies on fewer assumptions about trusted types. This is fixed in version 0.12.0.

Exploitation Scenario

An adversary identifies a target organization using skops for sharing scikit-learn models internally or consuming models from HuggingFace. The adversary crafts a malicious `.skops` model file by abusing the MethodNode inconsistency — using dot notation to traverse unexpected object fields, ultimately triggering arbitrary code execution when the file is deserialized. The attacker uploads the poisoned model to a public HuggingFace repository with a convincing name and README (e.g., a fine-tuned sentiment analysis model for a popular dataset). A data scientist or automated pipeline loads the model, executing attacker-controlled code with the privileges of the loading process. In a CI/CD context, this can lead to secrets exfiltration, lateral movement, or persistent backdoors in ML infrastructure.
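The defensive counterpart to this scenario is to audit a downloaded artifact before loading it. This sketch assumes the documented `skops.io` API (`get_untrusted_types`, and `load` with an explicit `trusted` allow-list); the helper names and the idea of a reviewed allow-list are illustrative:

```python
# Audit a .skops file for types outside a reviewed allow-list, and only
# deserialize when every reported type has been explicitly approved.

def unreviewed(found_types, allow_list):
    """Pure helper: types reported in the file but absent from the allow-list."""
    allowed = set(allow_list)
    return [t for t in found_types if t not in allowed]


def load_if_reviewed(path, allow_list):
    """Refuse to load unless all types in the artifact have been reviewed."""
    # Imports deferred so the audit logic above is skops-free and testable.
    from skops.io import get_untrusted_types, load

    leftover = unreviewed(get_untrusted_types(file=path), allow_list)
    if leftover:
        raise RuntimeError(f"Unreviewed types in {path}: {leftover}")
    return load(path, trusted=allow_list)
```

Note this is defense in depth, not a substitute for upgrading: on vulnerable versions the MethodNode flaw bypasses assumptions the type audit relies on, which is what makes this CVE severe.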

Timeline

Published
July 26, 2025
Last Modified
September 5, 2025
First Seen
July 26, 2025

Related Vulnerabilities