CVE-2024-11392: HuggingFace Transformers: RCE via config deserialization

GHSA-qxrp-vhvm-j765 HIGH PoC AVAILABLE
Published November 22, 2024
CISO Take

Any team running Hugging Face Transformers below 4.48.0 is exposed to full RCE if a user loads a malicious model config file, a routine action in ML workflows. With EPSS at ~59%, exploitation probability is high; patch immediately. Audit all model sources your team loads: the HuggingFace Hub, shared drives, and third-party repositories are all potential delivery vectors.

Risk Assessment

High risk for organizations with active ML engineering teams. CVSS 8.8 combined with EPSS ~59% signals realistic near-term exploitation. The attack requires user interaction (loading a malicious config), but this is indistinguishable from normal ML workflows where engineers routinely call AutoConfig.from_pretrained() or load_config() on external sources. Transformers is one of the most widely deployed ML libraries globally, making the blast radius enormous. Not in CISA KEV yet, but supply-chain delivery via the HuggingFace Hub makes silent compromise plausible.

Affected Systems

Package        Ecosystem   Vulnerable Range   Patched
transformers   pip         >= 0, < 4.48.0     4.48.0
(160.4K · OpenSSF 4.9 · 7.9K dependents · 39% patched · ~101d to patch)

Severity & Risk

CVSS 3.1
8.8 / 10
EPSS
59.3%
chance of exploitation in 30 days
Higher than 98% of all CVEs
Exploitation Status
Exploit Available
Exploitation: MEDIUM
Sophistication
Trivial
Exploitation Confidence
medium
Public PoC indexed (trickest/cve)
EPSS exploit prediction: 59%
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

Attack Vector (AV): Network
Attack Complexity (AC): Low
Privileges Required (PR): None
User Interaction (UI): Required
Scope (S): Unchanged
Confidentiality (C): High
Integrity (I): High
Availability (A): High

Recommended Action

6 steps
  1. IMMEDIATE

    Upgrade transformers to >= 4.48.0 across all environments (dev, staging, prod, CI/CD).

  2. Audit all model and config loading: identify every from_pretrained() call and its source.

  3. Allowlist trusted model sources; block loading configs from arbitrary URLs or unapproved HuggingFace repositories.

  4. Run pip audit and dependency scanners in CI pipelines to catch transitive exposure.

  5. Detection: monitor for unexpected child process spawning from Python processes (especially GPU workers or inference servers).

  6. Workaround if patching is delayed: load only locally stored, checksummed configs and avoid loading configs from remote sources or untrusted parties.
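Steps 3 and 6 above can be sketched with stdlib-only code. This is a minimal illustration, not an official Transformers API: the allowlist dict, placeholder digest, and helper name are all hypothetical, and in practice the expected digests would come from a signed manifest rather than a hardcoded dict.

```python
import hashlib
import json
from pathlib import Path

# Illustrative allowlist mapping approved local config paths to their
# expected SHA-256 digests (the digest below is a placeholder).
APPROVED_DIGESTS = {
    "configs/mobilevitv2.json": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def load_checksummed_config(path: str) -> dict:
    """Load a local JSON config only if its digest matches the allowlist."""
    data = Path(path).read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    expected = APPROVED_DIGESTS.get(path)
    if expected is None or digest != expected:
        raise PermissionError(f"config {path!r} is not on the approved list")
    # Plain JSON parse: no object deserialization, so no code execution path.
    return json.loads(data)
```

The key design choice is failing closed: anything not explicitly allowlisted is rejected, which blocks both tampered files and configs pulled from unapproved sources.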

CISA SSVC Assessment

Decision Track
Exploitation none
Automatable No
Technical Impact total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 9 - Risk management system for high-risk AI; Art. 15 - Accuracy, robustness and cybersecurity
ISO 42001
6.1.2 - AI risk assessment; 8.4 - AI system lifecycle: acquisition and supply chain
NIST AI RMF
GOVERN-6.1 - Policies and procedures for AI risk in the supply chain; MANAGE-2.2 - Mechanisms to address identified AI risks
OWASP LLM Top 10
LLM03:2025 - Supply Chain

Frequently Asked Questions

What is CVE-2024-11392?

Any team running Hugging Face Transformers below 4.48.0 is exposed to full RCE if a user loads a malicious model config file, a routine action in ML workflows. With EPSS at ~59%, exploitation probability is high; patch immediately. Audit all model sources your team loads: the HuggingFace Hub, shared drives, and third-party repositories are all potential delivery vectors.

Is CVE-2024-11392 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2024-11392, increasing the risk of exploitation.

How to fix CVE-2024-11392?

1. IMMEDIATE: Upgrade transformers to >= 4.48.0 across all environments (dev, staging, prod, CI/CD).
2. Audit all model and config loading: identify every from_pretrained() call and its source.
3. Allowlist trusted model sources; block loading configs from arbitrary URLs or unapproved HuggingFace repositories.
4. Run pip audit and dependency scanners in CI pipelines to catch transitive exposure.
5. Detection: monitor for unexpected child process spawning from Python processes (especially GPU workers or inference servers).
6. Workaround if patching is delayed: load only locally stored, checksummed configs and avoid loading configs from remote sources or untrusted parties.

What systems are affected by CVE-2024-11392?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, fine-tuning workflows, MLOps platforms, data science environments.

What is the CVSS score for CVE-2024-11392?

CVE-2024-11392 has a CVSS v3.1 base score of 8.8 (HIGH). The EPSS exploitation probability is 59.29%.

Technical Details

NVD Description

Hugging Face Transformers MobileViTV2 Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the handling of configuration files. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-24322.

Exploitation Scenario

An adversary publishes a weaponized MobileViTV2 model on HuggingFace Hub with a malicious serialized configuration file. They promote it via forums, GitHub issues, or ML community channels as a performance-optimized checkpoint. A data scientist or ML engineer runs AutoConfig.from_pretrained('attacker/malicious-mobilevitv2') or opens a shared config.json file received via Slack. During deserialization, the config triggers arbitrary code execution — dropping a reverse shell, exfiltrating API keys from environment variables, or pivoting to connected GPU infrastructure and model registries. The attack is invisible: the model may appear to load and run correctly while the payload executes in the background.
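The scenario hinges on a "config" being deserialized as something richer than plain JSON. A hedged, stdlib-only pre-flight check (the function and heuristic are my own illustration, not part of Transformers) can reject byte streams that resemble Python pickle before any loader touches them:

```python
import json

# Pickle protocol 2+ streams begin with the PROTO opcode, 0x80.
PICKLE_MAGIC = b"\x80"

def looks_like_plain_json(raw: bytes) -> bool:
    """Heuristic sanity check before handing a 'config' to any loader:
    reject pickle-like byte streams and require the payload to be valid JSON."""
    if raw[:1] == PICKLE_MAGIC:
        return False
    try:
        json.loads(raw)
        return True
    except ValueError:  # covers JSONDecodeError and UnicodeDecodeError
        return False
```

This is defense in depth, not a substitute for upgrading: a check like this narrows the attack surface for the "shared config.json via Slack" path but cannot catch every malicious serialization format.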

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
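The vector string above decodes to the metric labels shown in the Attack Surface section. A minimal stdlib parser makes the expansion explicit (the label map below is my own transcription of the CVSS v3.1 metric values, not part of this advisory's tooling):

```python
# Human-readable labels for CVSS v3.1 base-metric values.
CVSS31_LABELS = {
    "AV": {"N": "Network", "A": "Adjacent", "L": "Local", "P": "Physical"},
    "AC": {"L": "Low", "H": "High"},
    "PR": {"N": "None", "L": "Low", "H": "High"},
    "UI": {"N": "None", "R": "Required"},
    "S":  {"U": "Unchanged", "C": "Changed"},
    "C":  {"N": "None", "L": "Low", "H": "High"},
    "I":  {"N": "None", "L": "Low", "H": "High"},
    "A":  {"N": "None", "L": "Low", "H": "High"},
}

def parse_cvss31_vector(vector: str) -> dict:
    """Expand a CVSS:3.1 vector string into readable metric labels."""
    parts = vector.split("/")
    if parts[0] != "CVSS:3.1":
        raise ValueError("only CVSS v3.1 vectors are handled here")
    out = {}
    for part in parts[1:]:
        metric, value = part.split(":")
        out[metric] = CVSS31_LABELS[metric][value]
    return out
```

For this CVE, parse_cvss31_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H") yields Network attack vector with User Interaction Required, matching the table above.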

Timeline

Published
November 22, 2024
Last Modified
February 13, 2025
First Seen
November 22, 2024

Related Vulnerabilities