CVE-2023-6730: HuggingFace Transformers: RCE via unsafe deserialization

GHSA-3863-2447-669p HIGH PoC AVAILABLE
Published December 19, 2023
CISO Take

Any team loading HuggingFace models via the transformers library before 4.36.0 is exposed to remote code execution — triggered simply by loading a malicious model file. Patch to 4.36.0 immediately and audit all model-loading pipelines for untrusted sources. This is a supply-chain RCE vector that bypasses application-layer controls entirely.

Risk Assessment

HIGH operational risk for organizations with active ML pipelines. CVSS 8.8 with network vector, low complexity, and low privilege requirements makes exploitation straightforward once an attacker can position a malicious model file in the loading path. EPSS is currently low (0.16%), suggesting limited automated scanning, but the technique is well understood by the offensive community and requires no AI expertise. Exposure scales with how many teams load models from shared registries, CI/CD pipelines, or external sources.

Affected Systems

Package        Ecosystem   Vulnerable Range   Patched
transformers   pip         < 4.36.0           4.36.0

Severity & Risk

CVSS 3.1: 8.8 / 10 (HIGH)
EPSS: 0.2% chance of exploitation in 30 days (higher than 37% of all CVEs)
Exploitation Status: Exploit Available
Exploitation likelihood: MEDIUM; Sophistication: Moderate; Confidence: medium
Public PoC indexed (trickest/cve)
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

Attack Vector (AV): Network
Attack Complexity (AC): Low
Privileges Required (PR): Low
User Interaction (UI): None
Scope (S): Unchanged
Confidentiality (C): High
Integrity (I): High
Availability (A): High

Recommended Action

6 steps
  1. PATCH

    Upgrade transformers to >= 4.36.0 immediately. Verify via pip show transformers.

  2. AUDIT

    Inventory all code calling from_pretrained() or equivalent — flag any loading from non-verified sources.

  3. ARTIFACT TRUST

    Enforce model hash pinning (SHA256) for all production model pulls; reject unsigned or unverified model files.

  4. FORMAT

    Migrate to safetensors format where possible — it eliminates pickle-based deserialization entirely.

  5. ISOLATION

    Run model loading in sandboxed environments (containers with no network egress, minimal filesystem permissions).

  6. DETECT

    Alert on unexpected outbound connections from model-serving processes post-load.
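The PATCH and AUDIT steps can be automated in a CI gate. A minimal sketch, using only the standard library: the version parser below is naive (it handles release versions like "4.36.0", not pre-release tags), and the function names are illustrative, not part of any published tooling.

```python
# Sketch: fail fast if the installed transformers version is in the
# vulnerable range for CVE-2023-6730 (< 4.36.0).
from importlib.metadata import PackageNotFoundError, version

MIN_SAFE = (4, 36, 0)

def parse(v: str) -> tuple:
    # Naive parse, sufficient for release versions like "4.36.0".
    return tuple(int(p) for p in v.split(".")[:3])

def is_vulnerable(installed: str) -> bool:
    return parse(installed) < MIN_SAFE

def check_environment() -> None:
    try:
        v = version("transformers")
    except PackageNotFoundError:
        return  # transformers not installed in this environment
    if is_vulnerable(v):
        raise RuntimeError(
            f"transformers {v} is vulnerable to CVE-2023-6730; upgrade to >= 4.36.0"
        )
```

Running `check_environment()` at pipeline start turns the patch requirement into an enforced invariant rather than a one-time manual upgrade.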
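The ARTIFACT TRUST step amounts to refusing any weights file whose digest does not match a pinned value. A minimal sketch, standard library only; file paths and digests are placeholders for whatever your registry records.

```python
# Sketch: verify a downloaded model artifact against a pinned SHA-256
# digest before it is ever handed to a loader.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: Path, expected_sha256: str) -> None:
    actual = sha256_of(path)
    if actual != expected_sha256:
        raise ValueError(
            f"{path}: digest {actual} does not match pinned {expected_sha256}"
        )
```

For the FORMAT step, recent transformers releases accept a `use_safetensors=True` flag on `from_pretrained()`, which restricts loading to the safetensors format and avoids pickle deserialization altogether.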

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 17 - Quality management system (third-party components)
Article 9 - Risk management system
ISO 42001
6.1.2 - AI risk assessment
8.4 - AI system supply chain
NIST AI RMF
GOVERN 1.2 - Accountability for AI risk
MANAGE 2.2 - Treatment of identified AI risks
OWASP LLM Top 10
LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2023-6730?

CVE-2023-6730 is a deserialization-of-untrusted-data vulnerability in the HuggingFace transformers library prior to 4.36.0. Loading a malicious model file is enough to trigger remote code execution in the loading process, making this a supply-chain RCE vector that bypasses application-layer controls entirely. The fix is to upgrade to 4.36.0 and audit all model-loading pipelines for untrusted sources.

Is CVE-2023-6730 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2023-6730, increasing the risk of exploitation.

How to fix CVE-2023-6730?

1. PATCH: Upgrade transformers to >= 4.36.0 immediately; verify via `pip show transformers`.
2. AUDIT: Inventory all code calling from_pretrained() or equivalent; flag any loading from non-verified sources.
3. ARTIFACT TRUST: Enforce SHA-256 hash pinning for all production model pulls; reject unsigned or unverified model files.
4. FORMAT: Migrate to the safetensors format where possible; it eliminates pickle-based deserialization entirely.
5. ISOLATION: Run model loading in sandboxed environments (containers with no network egress, minimal filesystem permissions).
6. DETECT: Alert on unexpected outbound connections from model-serving processes post-load.

What systems are affected by CVE-2023-6730?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, fine-tuning pipelines, RAG pipelines, agent frameworks, MLOps CI/CD pipelines.

What is the CVSS score for CVE-2023-6730?

CVE-2023-6730 has a CVSS v3.1 base score of 8.8 (HIGH). The EPSS exploitation probability is 0.16%.

Technical Details

NVD Description

Deserialization of Untrusted Data in GitHub repository huggingface/transformers prior to 4.36.

Exploitation Scenario

Adversary creates a malicious HuggingFace model repository with a crafted pickle payload embedded in the model weights file (e.g., pytorch_model.bin). The payload establishes a reverse shell or exfiltrates credentials upon deserialization. The attacker promotes the repository via SEO, GitHub stars, or direct targeting of a victim org's model sourcing workflow. When a developer or automated MLOps pipeline runs `AutoModel.from_pretrained('attacker/malicious-model')`, the payload executes in the context of the training or inference server — often with cloud credentials or internal network access. No interaction beyond the model pull is required.
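The attack hinges on pickle executing code during deserialization. A defensive counterpart is to inspect the opcode stream before loading anything: `pickletools.genops` parses pickle bytecode without executing it, so a scanner can surface every global the payload would import. A minimal sketch; the function name is illustrative and the scan is a heuristic, not a complete sandbox.

```python
# Sketch: list the global references a pickle stream would import,
# without deserializing it. A "weights" file whose pickle stream pulls
# in os, subprocess, or builtins is a red flag.
import io
import pickletools

def pickle_globals(data: bytes) -> list:
    strings = []  # recent string args, consumed by STACK_GLOBAL
    hits = []
    for opcode, arg, _pos in pickletools.genops(io.BytesIO(data)):
        if isinstance(arg, str):
            strings.append(arg)
        if opcode.name == "GLOBAL":
            # Protocols <= 3: arg is "module name", space-separated.
            hits.append(arg.replace(" ", "."))
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            # Protocols 4+: module and name are the two preceding strings.
            hits.append(f"{strings[-2]}.{strings[-1]}")
    return hits
```

A pickle of plain containers yields no globals, while a `__reduce__`-style payload invoking `os.system` shows up as `posix.system` (or `nt.system` on Windows) before any code runs.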

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H

Timeline

Published
December 19, 2023
Last Modified
November 22, 2024
First Seen
December 19, 2023

Related Vulnerabilities