CVE-2024-11393: Transformers: RCE via MaskFormer model deserialization
GHSA-wrfc-pvp9-mr9g · HIGH · PoC available. Any environment running HuggingFace Transformers < 4.48.0 that loads MaskFormer models is exposed to remote code execution: no credentials are needed, only the loading of a malicious model file. With an EPSS of 0.76 (top 3% of all CVEs), exploitation probability is very high; patch immediately or block untrusted model loading. This is a direct supply chain risk: a malicious actor can publish a poisoned model to HuggingFace Hub and trigger RCE when your pipeline loads it.
Risk Assessment
High severity (CVSS 8.8) amplified by an exceptionally high EPSS score of 0.76116, indicating strong real-world exploitation likelihood. The attack requires no privileges and is network-accessible; the only friction is user interaction (loading a model file), which is routine in ML workflows — making the effective barrier extremely low. AI/ML environments that auto-download models from Hub registries without signature verification are at critical risk. Not yet in CISA KEV but EPSS trajectory suggests imminent active exploitation.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| transformers | pip | >= 0, < 4.48.0 | 4.48.0 |
Recommended Action
1. PATCH: Upgrade transformers to >= 4.48.0 immediately (`pip install --upgrade transformers`).
2. INVENTORY: Identify all services, pipelines, and containers using transformers < 4.48.0 (grep requirements.txt, Pipfile, pyproject.toml, and Dockerfiles).
3. RESTRICT: Enforce model loading only from internal, verified model registries; disable arbitrary model pulls from the public Hub in production.
4. VERIFY: Implement model signature verification or hash pinning before loading any external model artifact.
5. DETECT: Monitor for unexpected network connections or process spawning from Python ML processes.
6. SCAN: Run dependency audits (pip-audit, safety) across all ML environments.

Workaround if patching is blocked: allowlist approved model IDs and validate artifacts against known-good checksums before deserialization.
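The VERIFY step (hash pinning) can be sketched in a few lines of standard-library Python. This is a minimal illustration under stated assumptions, not the project's tooling: `APPROVED_DIGESTS` and `verify_model_file` are hypothetical names, and the digest shown is simply the SHA-256 of the placeholder bytes `b"test"`.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist mapping model artifact names to known-good
# SHA-256 digests, published out-of-band by a trusted internal registry.
APPROVED_DIGESTS = {
    # Placeholder digest: SHA-256 of b"test", for illustration only.
    "maskformer.bin": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_model_file(path: Path) -> bool:
    """Return True only if the file's SHA-256 matches its pinned digest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return APPROVED_DIGESTS.get(path.name) == digest
```

A pipeline would call `verify_model_file` on the downloaded artifact and refuse to deserialize anything that fails the check; the key design point is that the digest list lives outside the download path, so a poisoned upload cannot rewrite it.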
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2024-11393?
CVE-2024-11393 is a high-severity remote code execution vulnerability in HuggingFace Transformers versions before 4.48.0. Loading a maliciously crafted MaskFormer model file triggers deserialization of untrusted data, allowing arbitrary code to run in the context of the loading process; no credentials are required, and a poisoned model published to HuggingFace Hub is enough to compromise any pipeline that loads it.
Is CVE-2024-11393 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2024-11393, increasing the risk of exploitation.
How to fix CVE-2024-11393?
1. PATCH: Upgrade transformers to >= 4.48.0 immediately (pip install --upgrade transformers). 2. INVENTORY: Identify all services, pipelines, and containers using transformers < 4.48.0 (grep requirements.txt, Pipfile, pyproject.toml, Dockerfiles). 3. RESTRICT: Enforce model loading only from internal, verified model registries; disable arbitrary model pulls from public Hub in production. 4. VERIFY: Implement model signature verification or hash pinning before loading any external model artifact. 5. DETECT: Monitor for unexpected network connections or process spawning from Python ML processes. 6. SCAN: Run dependency audits (pip-audit, safety) across all ML environments. Workaround if patching is blocked: whitelist approved model IDs and validate against known-good checksums before deserialization.
What systems are affected by CVE-2024-11393?
This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, MLOps automation pipelines, computer vision inference systems, model evaluation workflows.
What is the CVSS score for CVE-2024-11393?
CVE-2024-11393 has a CVSS v3.1 base score of 8.8 (HIGH). The EPSS exploitation probability is 0.76 (approximately 76%).
Technical Details
NVD Description
Hugging Face Transformers MaskFormer Model Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of model files. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-25191.
Exploitation Scenario
An adversary crafts a malicious MaskFormer model file containing a serialized Python pickle payload that executes arbitrary OS commands upon deserialization. They publish this model to HuggingFace Hub under a convincing name (e.g., 'facebook/maskformer-swin-large-ade-optimized'). A data science team or automated MLOps pipeline calls from_pretrained() on the malicious model ID — a completely standard operation. The pickle payload fires during model loading, granting the attacker a reverse shell in the ML worker's process context, with access to training data, API keys in environment variables, cloud credentials, and internal network segments. No special privileges or prior access required.
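The deserialization primitive at the heart of this scenario can be demonstrated with the standard library alone: on load, pickle executes whatever callable a `__reduce__` method returns, so attacker-chosen code runs before any object is even constructed. The sketch below uses a harmless stand-in (`print` instead of `os.system`) to show the mechanism:

```python
import pickle

class Payload:
    # pickle serializes this object as "call this function with these args".
    # On pickle.loads(), the callable runs -- this is the exploit primitive
    # a poisoned model file abuses.
    def __reduce__(self):
        # Harmless stand-in; a real payload would invoke os.system, etc.
        return (print, ("code ran during deserialization",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # executes print(...) and returns its result
```

This is why the remediation guidance above steers loading toward verified artifacts: pickle-based model formats execute code by design, so trust must be established before deserialization, not after.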
Weaknesses (CWE)
CWE-502: Deserialization of Untrusted Data
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
References
- github.com/advisories/GHSA-wrfc-pvp9-mr9g
- github.com/huggingface/transformers/issues/34840
- github.com/huggingface/transformers/pull/35296
- github.com/pypa/advisory-database/tree/main/vulns/transformers/PYSEC-2024-228.yaml
- nvd.nist.gov/vuln/detail/CVE-2024-11393
- zerodayinitiative.com/advisories/ZDI-24-1514
- github.com/Kwaai-AI-Lab/OpenAI-Petal Exploit
- github.com/NVIDIA-AI-Blueprints/video-search-and-summarization Exploit
- github.com/PLENOBot/pleno-video-analyser Exploit
- github.com/Piyush-Bhor/CVE-2024-11393 Exploit
- github.com/nomi-sec/PoC-in-GitHub Exploit
Related Vulnerabilities (same package: transformers)
- CVE-2024-3568 (9.6): HuggingFace Transformers: RCE via pickle deserialization
- CVE-2024-11394 (8.8): Transformers: RCE via Trax model deserialization
- CVE-2023-6730 (8.8): HuggingFace Transformers: RCE via unsafe deserialization
- CVE-2024-11392 (8.8): HuggingFace Transformers: RCE via config deserialization
- CVE-2023-7018 (7.8): Transformers: unsafe deserialization enables RCE on load