CVE-2024-37053: MLflow: RCE via malicious scikit-learn model deserialization

HIGH PoC AVAILABLE
Published June 4, 2024
CISO Take

Any MLflow deployment where untrusted users can upload models is a full RCE vector — no authentication needed to upload, just get a victim to load the model. Audit who has write access to your MLflow model registry immediately and enforce model signing or allowlisting before loading external artifacts. If your data science teams pull models from shared or public registries without verification, assume compromise risk is active.

Risk Assessment

High risk for organizations running MLflow with open or loosely controlled model upload permissions. CVSS 8.8 reflects the realistic scenario: low attack complexity, no attacker privileges required, and complete CIA triad impact once triggered. The user interaction requirement (someone loading the model) is trivially satisfied in normal ML workflows where engineers routinely load models from the registry. Exposure is amplified in multi-tenant MLflow deployments, shared experiment environments, and CI/CD pipelines that auto-load models for evaluation.

Affected Systems

Package   Ecosystem   Vulnerable Range              Patched
mlflow    pip         ≥ 1.1.0 (per NVD description)  No patch

Do you use mlflow? You're affected.

Severity & Risk

CVSS 3.1
8.8 / 10
EPSS
0.4%
chance of exploitation in 30 days
Higher than 63% of all CVEs
Exploitation Status
Exploit Available
Exploitation: MEDIUM
Sophistication
Trivial
Exploitation Confidence
medium
Public PoC indexed (trickest/cve)
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

Attack Vector (AV): Network
Attack Complexity (AC): Low
Privileges Required (PR): None
User Interaction (UI): Required
Scope (S): Unchanged
Confidentiality (C): High
Integrity (I): High
Availability (A): High

Recommended Action

7 steps
  1. PATCH

    Update MLflow to the latest available version. No patched version is listed in the package data above, so check the HiddenLayer advisory and MLflow release notes for current remediation status before assuming you are fixed.

  2. ACCESS CONTROL

    Restrict model upload permissions to authenticated, authorized users only — enforce RBAC on MLflow server.

  3. MODEL SIGNING

    Implement model signing and verification before any load_model() call in pipelines.

  4. NETWORK ISOLATION

    Run MLflow behind VPN/internal network only; no public-facing exposure.

  5. DETECTION

    Monitor for unexpected process spawning from Python interpreter processes that loaded MLflow models (e.g., curl, bash, nc, reverse shells). Set up EDR alerts on pickle deserialization patterns.

  6. WORKAROUND

    If unpatched: scan model files with tools like fickling (Trail of Bits) to detect malicious pickle payloads before loading.

  7. AUDIT

    Review MLflow audit logs for unexpected model uploads, especially from new or external accounts.
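Steps 3 and 6 can be combined into a simple pre-load gate: refuse to deserialize any artifact whose digest is not on an approved list. A minimal sketch, assuming the allowlist is maintained out-of-band at model-approval time (the `APPROVED_SHA256` set and `verify_artifact` helper are illustrative names, not an MLflow API; `mlflow.sklearn.load_model` is the real load call):

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of approved model artifact digests, populated by the
# MLOps team when a model is reviewed and approved. The digest below is a
# placeholder, not a real approved artifact.
APPROVED_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def verify_artifact(path: str) -> bool:
    """Return True only if the file's SHA-256 digest is on the allowlist."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest in APPROVED_SHA256

# Usage sketch: gate the real MLflow call behind the check.
# if verify_artifact("artifacts/model/model.pkl"):
#     model = mlflow.sklearn.load_model("models:/my-model/1")
# else:
#     raise RuntimeError("unapproved model artifact; refusing to deserialize")
```

A hash allowlist is the bluntest form of model signing; organizations with more infrastructure can substitute detached signatures (e.g. Sigstore) while keeping the same fail-closed gate in front of every `load_model()` call.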

CISA SSVC Assessment

Decision Track
Exploitation none
Automatable No
Technical Impact total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Robustness, accuracy and cybersecurity
Article 9 - Risk management system
ISO 42001
6.1.2 - AI risk assessment
8.4 - AI system supply chain
NIST AI RMF
MANAGE 2.2 - Mechanisms to sustain the value of deployed AI systems are maintained
MAP 5.1 - Likelihood and magnitude of each identified impact based on impacts to individuals, groups, communities, organizations, and society
OWASP LLM Top 10
LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2024-37053?

CVE-2024-37053 is a deserialization vulnerability in MLflow (versions 1.1.0 and newer): a maliciously crafted scikit-learn model uploaded to an MLflow instance can execute arbitrary code on any system that loads it. Because engineers and pipelines routinely call load_model() on registry artifacts, any deployment where untrusted users can upload models is a practical remote code execution vector.

Is CVE-2024-37053 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2024-37053, increasing the risk of exploitation.

How to fix CVE-2024-37053?

1. PATCH: Update MLflow to the latest patched version immediately (check HiddenLayer advisory for exact versions).
2. ACCESS CONTROL: Restrict model upload permissions to authenticated, authorized users only — enforce RBAC on MLflow server.
3. MODEL SIGNING: Implement model signing and verification before any load_model() call in pipelines.
4. NETWORK ISOLATION: Run MLflow behind VPN/internal network only; no public-facing exposure.
5. DETECTION: Monitor for unexpected process spawning from Python interpreter processes that loaded MLflow models (e.g., curl, bash, nc, reverse shells). Set up EDR alerts on pickle deserialization patterns.
6. WORKAROUND (if unpatched): Scan model files with tools like fickling (Trail of Bits) to detect malicious pickle payloads before loading.
7. AUDIT: Review MLflow audit logs for unexpected model uploads, especially from new or external accounts.

What systems are affected by CVE-2024-37053?

This vulnerability affects the following AI/ML architecture patterns: Model registries, ML experiment tracking platforms, Training pipelines, MLOps CI/CD pipelines, Model serving (evaluation phase).

What is the CVSS score for CVE-2024-37053?

CVE-2024-37053 has a CVSS v3.1 base score of 8.8 (HIGH). The EPSS exploitation probability is 0.44%.

Technical Details

NVD Description

Deserialization of untrusted data can occur in versions of the MLflow platform running version 1.1.0 or newer, enabling a maliciously uploaded scikit-learn model to run arbitrary code on an end user’s system when interacted with.

Exploitation Scenario

Adversary identifies an organization's MLflow instance (often exposed internally, sometimes publicly). Without requiring existing credentials (PR:N), they register a malicious scikit-learn model file containing a crafted pickle payload — trivially generated with publicly available tools. The payload establishes a reverse shell or downloads a second-stage implant. The adversary then waits or socially engineers a team member to run an evaluation script, trigger a CI/CD pipeline that auto-evaluates new models, or simply browse the experiment in the MLflow UI — any of which calls load_model() and triggers execution. In automated MLOps pipelines, exploitation is fully silent: upload model, wait for the next evaluation cron job, receive shell.
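The underlying mechanism is Python's pickle protocol, which scikit-learn model serialization relies on: unpickling can invoke an arbitrary callable chosen by whoever wrote the bytes, via the object's `__reduce__` hook. A harmless, self-contained demonstration of the mechanism (all names here are illustrative; a real payload would return something like `(os.system, ("curl http://… | sh",))` instead of a benign function):

```python
import pickle

PROOF = {"ran": False}

def attacker_callable(msg):
    # Stand-in for os.system / subprocess: records that it was executed.
    PROOF["ran"] = True
    return msg

class MaliciousModel:
    """Mimics how a trojaned scikit-learn pickle carries code."""
    def __reduce__(self):
        # pickle stores this (callable, args) pair and calls it at LOAD time.
        return (attacker_callable, ("payload executed",))

payload = pickle.dumps(MaliciousModel())
result = pickle.loads(payload)   # merely loading the bytes runs the callable
print(PROOF["ran"], result)      # True payload executed
```

This is why the mitigation list above centers on never deserializing unverified artifacts: the code runs before any model object exists to inspect.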

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
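The 8.8 base score follows directly from this vector under the CVSS v3.1 specification's base-score equations; a quick recomputation using the spec's published metric weights:

```python
# Recompute the base score for CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
# using the numeric weights from the CVSS v3.1 specification.

AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.62   # Network, Low, None, Required
C = I = A = 0.56                           # High / High / High (Scope: Unchanged)

def roundup(x: float) -> float:
    """CVSS v3.1 Roundup: smallest value with one decimal place >= x."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

iss = 1 - (1 - C) * (1 - I) * (1 - A)      # Impact Sub-Score
impact = 6.42 * iss                        # Scope Unchanged variant
exploitability = 8.22 * AV * AC * PR * UI
base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 8.8
```

Note that the only sub-score holding this below 9.x is User Interaction: Required, which, as discussed above, is trivially satisfied in normal ML workflows.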

Timeline

Published
June 4, 2024
Last Modified
February 3, 2025
First Seen
June 4, 2024

Related Vulnerabilities