CVE-2024-11392: HuggingFace Transformers: RCE via config deserialization
GHSA-qxrp-vhvm-j765 · HIGH · PoC AVAILABLE

Any team running Hugging Face Transformers below 4.48.0 is exposed to full RCE if a user loads a malicious model config file — a routine action in ML workflows. With EPSS at ~55%, exploitation probability is high; patch immediately. Audit all model sources your team loads: HuggingFace Hub, shared drives, and third-party repositories are all potential delivery vectors.
Risk Assessment
High risk for organizations with active ML engineering teams. CVSS 8.8 combined with EPSS ~55% signals realistic near-term exploitation. The attack requires user interaction (loading a malicious config), but this is indistinguishable from normal ML workflows where engineers routinely call AutoConfig.from_pretrained() or load_config() from external sources. Transformers is one of the most deployed ML libraries globally, making the blast radius enormous. Not in CISA KEV yet, but supply-chain delivery via HuggingFace Hub makes silent compromise plausible.
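Because the fix is a simple version floor, exposure can be checked programmatically. A minimal sketch of a CI gate, using a hand-rolled parse of plain numeric version strings (the function name and floor-comparison approach are illustrative, not part of the advisory):

```python
import re

PATCHED_FLOOR = (4, 48, 0)  # first transformers release with the fix

def is_patched(version_string: str, floor=PATCHED_FLOOR) -> bool:
    """Return True if a transformers version string meets the patched floor.

    Compares only the leading numeric release components; pre-release
    suffixes (rc/dev) are ignored, which is acceptable for a coarse gate.
    """
    parts = []
    for piece in version_string.split("."):
        match = re.match(r"\d+", piece)
        if not match:
            break
        parts.append(int(match.group()))
    return tuple(parts) >= floor
```

In production, prefer `packaging.version.Version` or a scanner such as `pip-audit` over hand-rolled parsing; this sketch only illustrates the boundary.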
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| transformers | pip | >= 0, < 4.48.0 | 4.48.0 |
Recommended Action
1. IMMEDIATE: Upgrade transformers to >= 4.48.0 across all environments (dev, staging, prod, CI/CD).
2. Audit all model and config loading: identify every from_pretrained() call and its source.
3. Allowlist trusted model sources; block loading of configs from arbitrary URLs or unapproved HuggingFace repositories.
4. Run pip-audit and dependency scanners in CI pipelines to catch transitive exposure.
5. Detection: monitor for unexpected child processes spawned by Python processes (especially GPU workers and inference servers).
6. Workaround if patching is delayed: load only locally stored, checksummed configs; do not load configs from remote sources or untrusted parties.
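The checksum workaround in step 6 can be sketched as follows. The allowlist dictionary, its key names, and the placeholder digest are hypothetical artifacts your team would maintain, not anything shipped with Transformers:

```python
import hashlib
import json
from pathlib import Path

# Hypothetical allowlist mapping config names to known-good SHA-256 digests,
# populated from configs your team has reviewed and stored locally.
APPROVED_SHA256 = {
    "mobilevitv2-config": "0" * 64,  # placeholder digest
}

def load_verified_config(path: str, key: str) -> dict:
    """Parse a locally stored config.json only if its SHA-256 digest matches.

    Raises ValueError on mismatch so a tampered config never reaches the
    model-loading code path.
    """
    data = Path(path).read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    if digest != APPROVED_SHA256.get(key):
        raise ValueError(f"checksum mismatch for {path}: {digest}")
    return json.loads(data)
```

Parsing with `json.loads` here deliberately sidesteps any library-specific deserialization: the verified dictionary can then be handed to your own config plumbing rather than loaded blind from a remote source.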
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2024-11392?
CVE-2024-11392 is a deserialization-of-untrusted-data vulnerability in Hugging Face Transformers' handling of MobileViTV2 configuration files that allows remote code execution. Any team running Transformers below 4.48.0 is exposed to full RCE if a user loads a malicious model config file — a routine action in ML workflows. With EPSS at ~55%, exploitation probability is high; patch immediately. Audit all model sources your team loads: HuggingFace Hub, shared drives, and third-party repositories are all potential delivery vectors.
Is CVE-2024-11392 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2024-11392, increasing the risk of exploitation.
How to fix CVE-2024-11392?
1. IMMEDIATE: Upgrade transformers to >= 4.48.0 across all environments (dev, staging, prod, CI/CD). 2. Audit all model and config loading: identify every from_pretrained() call and its source. 3. Allowlist trusted model sources; block loading configs from arbitrary URLs or unapproved HuggingFace repositories. 4. Run pip audit and dependency scanners in CI pipelines to catch transitive exposure. 5. Detection: monitor for unexpected child process spawning from Python processes (especially GPU workers or inference servers). 6. Workaround if patching is delayed: load only locally-stored, checksummed configs and avoid loading configs from remote sources or untrusted parties.
What systems are affected by CVE-2024-11392?
This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, fine-tuning workflows, MLOps platforms, data science environments.
What is the CVSS score for CVE-2024-11392?
CVE-2024-11392 has a CVSS v3.1 base score of 8.8 (HIGH). The EPSS exploitation probability is 59.29%.
Technical Details
NVD Description
Hugging Face Transformers MobileViTV2 Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the handling of configuration files. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-24322.
Exploitation Scenario
An adversary publishes a weaponized MobileViTV2 model on HuggingFace Hub with a malicious serialized configuration file. They promote it via forums, GitHub issues, or ML community channels as a performance-optimized checkpoint. A data scientist or ML engineer runs AutoConfig.from_pretrained('attacker/malicious-mobilevitv2') or opens a shared config.json file received via Slack. During deserialization, the config triggers arbitrary code execution — dropping a reverse shell, exfiltrating API keys from environment variables, or pivoting to connected GPU infrastructure and model registries. The attack is invisible: the model may appear to load and run correctly while the payload executes in the background.
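A guard placed in front of every from_pretrained() call blocks the scenario above at the source-selection step. A minimal sketch; the org names in the allowlist are illustrative placeholders, not a recommendation:

```python
# Illustrative allowlist of HuggingFace namespaces your org has vetted.
TRUSTED_ORGS = {"google", "facebook", "openai"}

def assert_trusted_repo(repo_id: str) -> None:
    """Reject repo ids whose namespace is not on the allowlist.

    Expects the usual "org/model-name" form; bare names without a
    namespace are rejected too, since they cannot be attributed to
    a vetted organization.
    """
    org, sep, name = repo_id.partition("/")
    if not sep or not name or org not in TRUSTED_ORGS:
        raise PermissionError(f"untrusted model source: {repo_id!r}")
```

Calling `assert_trusted_repo("attacker/malicious-mobilevitv2")` raises before any config bytes are fetched, which is the point: the deserialization never runs.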
Weaknesses (CWE)
CWE-502: Deserialization of Untrusted Data
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
References
- github.com/advisories/GHSA-qxrp-vhvm-j765
- github.com/huggingface/transformers/issues/34840
- github.com/huggingface/transformers/pull/35296
- github.com/pypa/advisory-database/tree/main/vulns/transformers/PYSEC-2024-227.yaml
- nvd.nist.gov/vuln/detail/CVE-2024-11392
- zerodayinitiative.com/advisories/ZDI-24-1513
- github.com/ARPSyndicate/cve-scores Exploit
- github.com/Kwaai-AI-Lab/OpenAI-Petal Exploit
- github.com/NVIDIA-AI-Blueprints/video-search-and-summarization Exploit
- github.com/PLENOBot/pleno-video-analyser Exploit
- github.com/Piyush-Bhor/CVE-2024-11392 Exploit
- github.com/kshartman/voicemail-transcriber Exploit
- github.com/nomi-sec/PoC-in-GitHub Exploit
Related Vulnerabilities
- CVE-2024-3568 (9.6) — HuggingFace Transformers: RCE via pickle deserialization (same package: transformers)
- CVE-2024-11394 (8.8) — Transformers: RCE via Trax model deserialization (same package: transformers)
- CVE-2023-6730 (8.8) — HuggingFace Transformers: RCE via unsafe deserialization (same package: transformers)
- CVE-2024-11393 (8.8) — Transformers: RCE via MaskFormer model deserialization (same package: transformers)
- CVE-2023-7018 (7.8) — Transformers: unsafe deserialization enables RCE on load (same package: transformers)