CVE-2024-11394: Transformers: RCE via Trax model deserialization

GHSA-hxxf-235m-72v3 | Severity: HIGH | PoC available
Published November 22, 2024
CISO Take

Any team loading Trax model files through Hugging Face Transformers < 4.48.0 is exposed to remote code execution — including automated MLOps pipelines that pull from model hubs. Patch to 4.48.0 immediately and audit every model-loading path in your ML infrastructure. Treat untrusted model files the same way you treat untrusted executables.

Risk Assessment

CVSS 8.8 with ~65% EPSS indicates meaningful real-world exploitation probability. While user interaction is required, that bar is trivially cleared via social engineering, compromised model repositories, or automated pipelines that pull community models without verification. The attack requires no privileges and executes in the context of the loading process — in MLOps environments this is often a privileged service account with broad data and infrastructure access. Impact is amplified because Transformers is one of the most widely deployed ML libraries globally.

Affected Systems

Package        Ecosystem   Vulnerable Range   Patched
transformers   pip         >= 0, < 4.48.0     4.48.0

Severity & Risk

CVSS 3.1: 8.8 / 10
EPSS: 65.0% chance of exploitation in 30 days (higher than 98% of all CVEs)
Exploitation status: Exploit Available
Exploitation: MEDIUM
Sophistication: Trivial
Exploitation confidence: medium (public PoC indexed in trickest/cve; EPSS exploit prediction: 65%)

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

AV: Network
AC: Low
PR: None
UI: Required
S: Unchanged
C: High
I: High
A: High

Recommended Action

  1. Patch immediately: upgrade transformers to >= 4.48.0 across all environments (training, inference, CI/CD).

  2. Audit model provenance: inventory all locations where Trax model files are loaded and verify each source is trusted.

  3. Restrict model loading: enforce allowlists for model sources; block loading from arbitrary URLs or unvetted user uploads.

  4. Implement model signing: use cryptographic signatures (e.g., Sigstore/cosign) to verify model integrity before loading.

  5. Apply least privilege: run model-loading processes with minimal OS permissions and network access, ideally in sandboxed containers.

  6. Monitor for exploitation: alert on unexpected outbound connections or process spawning during model load events.

  7. Review CI/CD pipelines: any automated job that loads Trax models from external registries should be updated before next run.
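Steps 1 and 3 above can be sketched as a thin wrapper that gates every model load on an explicit allowlist. This is a minimal illustration, not part of the transformers API; the allowlist entries and function names are assumptions for the example.

```python
# Allowlist gate for model sources (sketch). Only identifiers on the
# explicit allowlist ever reach the deserialization code path.
ALLOWED_SOURCES = {
    "google-bert/bert-base-uncased",   # example entries; populate from
    "openai-community/gpt2",           # your vetted internal inventory
}

def is_allowed(model_id: str) -> bool:
    """Return True only for model identifiers on the allowlist."""
    return model_id in ALLOWED_SOURCES

def load_model_checked(model_id: str):
    """Refuse to load any model whose source has not been vetted."""
    if not is_allowed(model_id):
        raise ValueError(f"model source not on allowlist: {model_id}")
    # Deferred import so the gate itself has no heavy dependencies.
    # Requires the patched library: transformers >= 4.48.0.
    from transformers import AutoModel
    return AutoModel.from_pretrained(model_id)
```

Routing all loads through one such wrapper also gives a single choke point for the logging and alerting described in step 6.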

CISA SSVC Assessment

Decision: Track
Exploitation: none
Automatable: No
Technical Impact: total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
  Article 15 - Accuracy, robustness and cybersecurity
  Article 9 - Risk management system
ISO 42001
  A.6.1.3 - AI risk assessment
  A.8.5 - AI system configuration management
NIST AI RMF
  GOVERN 6.1 - Policies and procedures for AI risk across the supply chain
  MANAGE 2.2 - Mechanisms to sustain safe AI deployment
OWASP LLM Top 10
  LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2024-11394?

CVE-2024-11394 is a remote code execution vulnerability in Hugging Face Transformers versions below 4.48.0. Trax model files are deserialized without proper validation, so loading an attacker-controlled model file executes arbitrary code in the context of the loading process — including in automated MLOps pipelines that pull models from hubs. The fix is to upgrade to 4.48.0 and audit every model-loading path; untrusted model files should be treated like untrusted executables.

Is CVE-2024-11394 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2024-11394, increasing the risk of exploitation.

How to fix CVE-2024-11394?

1. Patch immediately: upgrade transformers to >= 4.48.0 across all environments (training, inference, CI/CD).
2. Audit model provenance: inventory all locations where Trax model files are loaded and verify each source is trusted.
3. Restrict model loading: enforce allowlists for model sources; block loading from arbitrary URLs or unvetted user uploads.
4. Implement model signing: use cryptographic signatures (e.g., Sigstore/cosign) to verify model integrity before loading.
5. Apply least privilege: run model-loading processes with minimal OS permissions and network access, ideally in sandboxed containers.
6. Monitor for exploitation: alert on unexpected outbound connections or process spawning during model load events.
7. Review CI/CD pipelines: any automated job that loads Trax models from external registries should be updated before its next run.

What systems are affected by CVE-2024-11394?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps pipelines, model hub integrations, fine-tuning workflows, research/experimentation environments.

What is the CVSS score for CVE-2024-11394?

CVE-2024-11394 has a CVSS v3.1 base score of 8.8 (HIGH). The EPSS exploitation probability is 65.05%.

Technical Details

NVD Description

Hugging Face Transformers Trax Model Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the handling of model files. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-25012.

Exploitation Scenario

An adversary publishes a Trax model to HuggingFace Hub under a name resembling a popular legitimate model (typosquatting) or directly contributes a malicious model to an open-source project. A data scientist or automated MLOps pipeline calls `from_pretrained()` or equivalent Trax loading logic, triggering deserialization of the attacker-controlled payload. The embedded malicious code executes in the context of the loading process — typically granting the attacker access to training data, cloud credentials stored in environment variables, GPU cluster credentials, or the ability to backdoor subsequent model outputs. In enterprise environments, this frequently leads to lateral movement via stolen IAM credentials or cloud metadata service access.
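Because the flaw is pickle-style deserialization, one practical defense-in-depth measure is scanning model artifacts for the opcodes that can invoke arbitrary callables before ever loading them. The sketch below uses the standard-library `pickletools` module; the opcode set and function names are illustrative assumptions, and a real scanner would need to handle archive formats and false positives.

```python
# Flag pickle payloads whose opcode stream can invoke arbitrary callables
# (GLOBAL/STACK_GLOBAL + REDUCE), the mechanism behind deserialization RCE.
import io
import pickle
import pickletools

SUSPICIOUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def suspicious_opcodes(payload: bytes) -> set:
    """Return the set of code-execution-capable opcodes found in a pickle."""
    found = set()
    for opcode, arg, pos in pickletools.genops(io.BytesIO(payload)):
        if opcode.name in SUSPICIOUS_OPS:
            found.add(opcode.name)
    return found

# A pickle of plain data carries none of the flagged opcodes...
benign = pickle.dumps({"weights": [0.1, 0.2]})

# ...while a pickle whose __reduce__ invokes a callable on load does
# (print stands in here for a real payload such as os.system).
class Crafted:
    def __reduce__(self):
        return (print, ("payload would run here",))

crafted = pickle.dumps(Crafted())
```

Note that scanning is a heuristic: it catches the common attack pattern but is no substitute for upgrading to the patched release and refusing untrusted model files outright.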

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
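For programmatic triage, the vector string above splits cleanly into metric/value pairs. A minimal parser (the function name is illustrative):

```python
# Parse a CVSS v3.1 vector string into a {metric: value} dict.
VECTOR = "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H"

def parse_cvss(vector: str) -> dict:
    parts = vector.split("/")
    if not parts[0].startswith("CVSS:"):
        raise ValueError("not a CVSS vector")
    return dict(p.split(":") for p in parts[1:])

metrics = parse_cvss(VECTOR)
# metrics["UI"] == "R" confirms the user-interaction requirement
# discussed in the Risk Assessment section.
```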

Timeline

Published: November 22, 2024
Last Modified: February 13, 2025
First Seen: November 22, 2024

Related Vulnerabilities