CVE-2024-11393: Transformers: RCE via MaskFormer model deserialization

GHSA-wrfc-pvp9-mr9g HIGH PoC AVAILABLE
Published November 22, 2024
CISO Take

Any environment running HuggingFace Transformers < 4.48.0 that loads MaskFormer models is vulnerable to remote code execution — no credentials needed, just loading a malicious model file. With an EPSS of 0.76 (top 1% of all CVEs), exploitation probability is very high; patch immediately or block untrusted model loading. This is a direct supply chain risk: a malicious actor can publish a poisoned model to HuggingFace Hub and trigger RCE when your pipeline loads it.

Risk Assessment

High severity (CVSS 8.8) amplified by an exceptionally high EPSS score of 0.76116, indicating strong real-world exploitation likelihood. The attack requires no privileges and is network-accessible; the only friction is user interaction (loading a model file), which is routine in ML workflows — making the effective barrier extremely low. AI/ML environments that auto-download models from Hub registries without signature verification are at critical risk. Not yet in CISA KEV but EPSS trajectory suggests imminent active exploitation.

Affected Systems

Package Ecosystem Vulnerable Range Patched
transformers pip >= 0, < 4.48.0 4.48.0

Severity & Risk

CVSS 3.1: 8.8 / 10
EPSS: 79.5% chance of exploitation in 30 days (higher than 99% of all CVEs)
Exploitation Status: Exploit Available (public PoC indexed in trickest/cve)
Exploitation Confidence: medium
Sophistication: Trivial
EPSS exploit prediction: 80%
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

Attack Vector (AV): Network
Attack Complexity (AC): Low
Privileges Required (PR): None
User Interaction (UI): Required
Scope (S): Unchanged
Confidentiality (C): High
Integrity (I): High
Availability (A): High

Recommended Action

6 steps
  1. PATCH

    Upgrade transformers to >= 4.48.0 immediately (pip install --upgrade transformers).

  2. INVENTORY

    Identify all services, pipelines, and containers using transformers < 4.48.0 (grep requirements.txt, Pipfile, pyproject.toml, Dockerfiles).

  3. RESTRICT

    Enforce model loading only from internal, verified model registries; disable arbitrary model pulls from public Hub in production.

  4. VERIFY

    Implement model signature verification or hash pinning before loading any external model artifact.

  5. DETECT

    Monitor for unexpected network connections or process spawning from Python ML processes.

  6. SCAN

    Run dependency audits (pip-audit, safety) across all ML environments. Workaround if patching is blocked: whitelist approved model IDs and validate against known-good checksums before deserialization.
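Steps 3 and 4 above (RESTRICT and VERIFY) can be condensed into a single pre-load gate. The sketch below is illustrative, not part of transformers: the `APPROVED_SHA256` allowlist and `verify_artifact` helper are hypothetical names, and the digests must come from checksums you computed on your own vetted artifacts.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: artifact filename -> known-good SHA-256 digest.
# Populate from checksums computed on vetted copies of each model file.
APPROVED_SHA256: dict[str, str] = {}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so multi-GB checkpoints fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path) -> None:
    """Refuse to hand an artifact to any deserializer unless it is both
    allowlisted and byte-identical to the pinned digest. Fails closed."""
    expected = APPROVED_SHA256.get(path.name)
    if expected is None:
        raise PermissionError(f"{path.name} is not an approved model artifact")
    actual = sha256_of(path)
    if actual != expected:
        raise ValueError(f"checksum mismatch for {path.name}: got {actual}")
```

Calling a gate like this on every downloaded file before from_pretrained() touches it means an unknown or tampered artifact is rejected before any deserialization occurs.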

CISA SSVC Assessment

Decision: Track
Exploitation: none
Automatable: No
Technical Impact: total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
  Article 15 - Accuracy, robustness and cybersecurity
  Article 9 - Risk management system
ISO 42001
  A.6.1.4 - AI supply chain security
  A.9.4 - Protection of AI system resources
NIST AI RMF
  GOVERN 1.7 - Processes for AI risk management
  MANAGE 2.2 - Mechanisms for AI risk treatment
OWASP LLM Top 10
  LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2024-11393?

Any environment running HuggingFace Transformers < 4.48.0 that loads MaskFormer models is vulnerable to remote code execution — no credentials needed, just loading a malicious model file. With an EPSS of 0.76 (top 1% of all CVEs), exploitation probability is very high; patch immediately or block untrusted model loading. This is a direct supply chain risk: a malicious actor can publish a poisoned model to HuggingFace Hub and trigger RCE when your pipeline loads it.

Is CVE-2024-11393 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2024-11393, increasing the risk of exploitation.

How to fix CVE-2024-11393?

1. PATCH: Upgrade transformers to >= 4.48.0 immediately (pip install --upgrade transformers).
2. INVENTORY: Identify all services, pipelines, and containers using transformers < 4.48.0 (grep requirements.txt, Pipfile, pyproject.toml, Dockerfiles).
3. RESTRICT: Enforce model loading only from internal, verified model registries; disable arbitrary model pulls from public Hub in production.
4. VERIFY: Implement model signature verification or hash pinning before loading any external model artifact.
5. DETECT: Monitor for unexpected network connections or process spawning from Python ML processes.
6. SCAN: Run dependency audits (pip-audit, safety) across all ML environments.
Workaround if patching is blocked: whitelist approved model IDs and validate against known-good checksums before deserialization.

What systems are affected by CVE-2024-11393?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps automation pipelines, computer vision inference systems, model evaluation workflows.

What is the CVSS score for CVE-2024-11393?

CVE-2024-11393 has a CVSS v3.1 base score of 8.8 (HIGH). The EPSS exploitation probability is 79.53%.

Technical Details

NVD Description

Hugging Face Transformers MaskFormer Model Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of model files. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-25191.

Exploitation Scenario

An adversary crafts a malicious MaskFormer model file containing a serialized Python pickle payload that executes arbitrary OS commands upon deserialization. They publish this model to HuggingFace Hub under a convincing name (e.g., 'facebook/maskformer-swin-large-ade-optimized'). A data science team or automated MLOps pipeline calls from_pretrained() on the malicious model ID — a completely standard operation. The pickle payload fires during model loading, granting the attacker a reverse shell in the ML worker's process context, with access to training data, API keys in environment variables, cloud credentials, and internal network segments. No special privileges or prior access required.
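As defense in depth against the scenario above, a loader can statically walk a pickle's opcode stream before deserializing it and flag imports of modules commonly abused in payloads. This is a heuristic sketch, not a substitute for the patch — payloads can be staged to evade it. It uses only the standard-library pickletools; the `SUSPICIOUS_MODULES` list and `suspicious_imports` name are illustrative assumptions.

```python
import pickletools

# Modules whose import inside a model pickle is almost never legitimate.
# Illustrative, not exhaustive -- payloads can route through other modules.
SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "builtins", "sys"}

def suspicious_imports(data: bytes) -> list[str]:
    """Walk the opcode stream WITHOUT executing it and return every
    module.name pair the pickle would import at load time."""
    found: list[str] = []
    strings: list[str] = []  # recent string pushes; these feed STACK_GLOBAL
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in {"SHORT_BINUNICODE", "BINUNICODE", "UNICODE"}:
            strings.append(arg)
        elif opcode.name == "GLOBAL":  # protocols 0-3: "module name" literal
            module, _, name = arg.partition(" ")
            if module in SUSPICIOUS_MODULES:
                found.append(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            module, name = strings[-2], strings[-1]  # protocol 4+: from stack
            if module in SUSPICIOUS_MODULES:
                found.append(f"{module}.{name}")
    return found
```

A payload of the kind described above typically pickles a callable such as os.system or subprocess.Popen, which this walk surfaces before any code runs. The robust fix remains upgrading to >= 4.48.0 and preferring safetensors checkpoints (e.g. from_pretrained(..., use_safetensors=True)), which avoid pickle deserialization entirely.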

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

Timeline

Published
November 22, 2024
Last Modified
February 13, 2025
First Seen
November 22, 2024

Related Vulnerabilities