CVE-2023-6730: HuggingFace Transformers: RCE via unsafe deserialization
GHSA-3863-2447-669p | HIGH | PoC AVAILABLE

Any team loading HuggingFace models via the transformers library before 4.36.0 is exposed to remote code execution, triggered simply by loading a malicious model file. Patch to 4.36.0 immediately and audit all model-loading pipelines for untrusted sources. This is a supply-chain RCE vector that bypasses application-layer controls entirely.
Risk Assessment
HIGH operational risk for organizations with active ML pipelines. CVSS 8.8 with network vector, low complexity, and low privilege requirements makes exploitation straightforward once an attacker can position a malicious model file in the loading path. EPSS is currently low (0.16%), suggesting limited automated scanning, but the technique is well understood by the offensive community and requires no AI expertise. Exposure scales with how many teams load models from shared registries, CI/CD pipelines, or external sources.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| transformers | pip | < 4.36.0 | 4.36.0 |
Recommended Action
6 steps:

1. PATCH: Upgrade transformers to >= 4.36.0 immediately. Verify via `pip show transformers`.
2. AUDIT: Inventory all code calling from_pretrained() or equivalent; flag any loading from non-verified sources.
3. ARTIFACT TRUST: Enforce model hash pinning (SHA256) for all production model pulls; reject unsigned or unverified model files.
4. FORMAT: Migrate to the safetensors format where possible; it eliminates pickle-based deserialization entirely.
5. ISOLATION: Run model loading in sandboxed environments (containers with no network egress, minimal filesystem permissions).
6. DETECT: Alert on unexpected outbound connections from model-serving processes post-load.
Frequently Asked Questions
What is CVE-2023-6730?
CVE-2023-6730 is a remote code execution vulnerability in the HuggingFace transformers library, caused by deserialization of untrusted data. Any team loading models via transformers before 4.36.0 is exposed; the payload triggers simply by loading a malicious model file. Patch to 4.36.0 immediately and audit all model-loading pipelines for untrusted sources. This is a supply-chain RCE vector that bypasses application-layer controls entirely.
Is CVE-2023-6730 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2023-6730, increasing the risk of exploitation.
How to fix CVE-2023-6730?
1. PATCH: Upgrade transformers to >= 4.36.0 immediately. Verify via `pip show transformers`. 2. AUDIT: Inventory all code calling from_pretrained() or equivalent — flag any loading from non-verified sources. 3. ARTIFACT TRUST: Enforce model hash pinning (SHA256) for all production model pulls; reject unsigned or unverified model files. 4. FORMAT: Migrate to safetensors format where possible — it eliminates pickle-based deserialization entirely. 5. ISOLATION: Run model loading in sandboxed environments (containers with no network egress, minimal filesystem permissions). 6. DETECT: Alert on unexpected outbound connections from model-serving processes post-load.
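The step-1 version check can be automated. `is_patched` below is a hypothetical helper; the installed version string would come from `pip show transformers` or `importlib.metadata.version("transformers")`:

```python
def is_patched(installed: str, minimum: tuple = (4, 36, 0)) -> bool:
    """Compare a transformers version string numerically against 4.36.0."""
    parts = []
    for piece in installed.split(".")[:3]:
        # Tolerate suffixes like "0.dev0" by keeping only the digits.
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    # Pad short versions like "4.36" so the tuple comparison is well-defined.
    while len(parts) < 3:
        parts.append(0)
    return tuple(parts) >= minimum
```

A numeric tuple comparison matters here: naive string comparison would rank "4.9.0" above "4.36.0" and report a vulnerable install as patched.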
What systems are affected by CVE-2023-6730?
This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, fine-tuning pipelines, RAG pipelines, agent frameworks, MLOps CI/CD pipelines.
What is the CVSS score for CVE-2023-6730?
CVE-2023-6730 has a CVSS v3.1 base score of 8.8 (HIGH). The EPSS exploitation probability is 0.16%.
Technical Details
NVD Description
Deserialization of Untrusted Data in GitHub repository huggingface/transformers prior to 4.36.
Exploitation Scenario
Adversary creates a malicious HuggingFace model repository with a crafted pickle payload embedded in the model weights file (e.g., pytorch_model.bin). The payload establishes a reverse shell or exfiltrates credentials upon deserialization. The attacker promotes the repository via SEO, GitHub stars, or direct targeting of a victim org's model sourcing workflow. When a developer or automated MLOps pipeline runs `AutoModel.from_pretrained('attacker/malicious-model')`, the payload executes in the context of the training or inference server — often with cloud credentials or internal network access. No interaction beyond the model pull is required.
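The underlying mechanism is ordinary pickle behavior rather than a transformers-specific flaw: `__reduce__` lets a serialized object name any callable for pickle to invoke at load time. A benign sketch, using `eval` on arithmetic as a stand-in for `os.system`:

```python
import pickle

class Payload:
    def __reduce__(self):
        # pickle stores (callable, args) and invokes it during loads();
        # a real attack returns os.system or subprocess calls, not eval.
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())   # what a malicious pytorch_model.bin embeds
result = pickle.loads(blob)      # the callable runs during deserialization
print(result)                    # → 42: code executed just by loading bytes
```

Note that the `Payload` class need not exist on the victim's machine; the pickled bytes carry the callable reference themselves, which is why merely pulling and loading the file suffices.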
Weaknesses (CWE)
CWE-502: Deserialization of Untrusted Data
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H

References
- github.com/advisories/GHSA-3863-2447-669p
- github.com/pypa/advisory-database/tree/main/vulns/transformers/PYSEC-2023-300.yaml
- nvd.nist.gov/vuln/detail/CVE-2023-6730
- github.com/huggingface/transformers/commit/1d63b0ec361e7a38f1339385e8a5a855085532ce Patch
- huntr.com/bounties/423611ee-7a2a-442a-babb-3ed2f8385c16 Exploit
- github.com/MLegkovskis/tiny-llm-cicd Exploit
- github.com/a4abdul7/mlops Exploit
Related Vulnerabilities
- CVE-2024-3568 (9.6) HuggingFace Transformers: RCE via pickle deserialization (same package: transformers)
- CVE-2024-11394 (8.8) Transformers: RCE via Trax model deserialization (same package: transformers)
- CVE-2024-11393 (8.8) Transformers: RCE via MaskFormer model deserialization (same package: transformers)
- CVE-2024-11392 (8.8) HuggingFace Transformers: RCE via config deserialization (same package: transformers)
- CVE-2023-7018 (7.8) Transformers: unsafe deserialization enables RCE on load (same package: transformers)