GHSA-wccx-j62j-r448: fickling: Protection Bypass circumvents security controls
Severity: HIGH

If your ML pipeline uses fickling's always_check_safety() as a security control for model deserialization, it is broken: attackers can bypass it entirely by targeting the pickle.loads or _pickle.loads entry points. Upgrade fickling to 0.1.9 immediately and treat any model loaded via fickling <= 0.1.8 as potentially compromised. Do not rely on fickling as a sole defense layer; add upstream model provenance controls.
Risk Assessment
HIGH. The vulnerability completely defeats the advertised security guarantee of fickling's global safety mode. Exploitability is trivial — an attacker simply uses pickle.loads instead of pickle.load. Impact is remote code execution with the privileges of the loading process. Exposure is significant in any ML/AI pipeline that loads external or third-party model files and relies on fickling as a safety gate. Organizations that implemented fickling specifically to mitigate pickle deserialization risk now have a false sense of security, which is arguably worse than no control at all.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| fickling | pip | <= 0.1.8 | 0.1.9 |
If you use any fickling release up to and including 0.1.8, you are affected.
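As a quick triage check, the installed version can be compared against the first patched release. This is an illustrative stdlib-only sketch; real tooling should use packaging.version or a scanner such as pip-audit rather than the naive parser shown here:

```python
from importlib import metadata

MIN_SAFE = (0, 1, 9)  # first fickling release with full hook coverage

def parse_version(v):
    """Naive dotted-version parse; use packaging.version in real code."""
    parts = []
    for p in v.split("."):
        if p.isdigit():
            parts.append(int(p))
        else:
            break
    return tuple(parts)

def fickling_is_patched():
    """True if installed fickling >= 0.1.9, False if older, None if not installed."""
    try:
        installed = metadata.version("fickling")
    except metadata.PackageNotFoundError:
        return None
    return parse_version(installed) >= MIN_SAFE

print(fickling_is_patched())
```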
Recommended Action
1. PATCH: Upgrade fickling to 0.1.9 immediately: pip install 'fickling>=0.1.9'.
2. VERIFY: Confirm the patched version hooks pickle.loads and _pickle.loads by running fickling's regression test suite.
3. DEFENSE-IN-DEPTH: Do not rely solely on fickling; implement model provenance controls (cryptographic signatures, hashes, trusted registries).
4. SCAN HISTORY: Audit logs for model loads that occurred while fickling <= 0.1.8 was in use; treat any externally sourced model as suspect.
5. DETECT: Alert on unexpected subprocess spawning or network calls during model load/deserialization operations.
6. BLOCK: Prefer the pickle-free SafeTensors format for model serialization where possible.
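The provenance control in the steps above can be sketched with a stdlib digest check. The names here (sha256_of, verify_model, the pinned digest) are illustrative; a real deployment would pin digests in a signed manifest or trusted registry rather than recomputing them in-process:

```python
import hashlib
import hmac
import tempfile
from pathlib import Path

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large model files are not read into memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

def verify_model(path, expected_digest):
    """Constant-time compare against a digest pinned when the model was vetted."""
    return hmac.compare_digest(sha256_of(path), expected_digest)

# Demo: pin a digest, then detect tampering.
with tempfile.TemporaryDirectory() as tmp:
    model = Path(tmp) / "model.pt"
    model.write_bytes(b"trusted model bytes")
    pinned = sha256_of(model)           # recorded at model-registration time
    print(verify_model(model, pinned))  # True
    model.write_bytes(b"tampered model bytes")
    print(verify_model(model, pinned))  # False
```

Refusing to deserialize any file that fails this check means a malicious pickle never reaches an unpickler, regardless of which entry point the loader uses.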
Frequently Asked Questions
What is GHSA-wccx-j62j-r448?
GHSA-wccx-j62j-r448 is a protection-bypass vulnerability in fickling <= 0.1.8: always_check_safety() hooks pickle.load but not pickle.loads, _pickle.load, or _pickle.loads, so payloads deserialized through those entry points execute despite global safety mode being enabled. The fix is in fickling 0.1.9; any model loaded through a vulnerable version should be treated as potentially compromised.
Is GHSA-wccx-j62j-r448 actively exploited?
No confirmed active exploitation of GHSA-wccx-j62j-r448 has been reported, but organizations should still patch proactively.
How to fix GHSA-wccx-j62j-r448?
1. PATCH: Upgrade fickling to 0.1.9 immediately: pip install 'fickling>=0.1.9'.
2. VERIFY: Confirm the patched version hooks pickle.loads and _pickle.loads by running fickling's regression test suite.
3. DEFENSE-IN-DEPTH: Do not rely solely on fickling; implement model provenance controls (cryptographic signatures, hashes, trusted registries).
4. SCAN HISTORY: Audit logs for model loads that occurred while fickling <= 0.1.8 was in use; treat any externally sourced model as suspect.
5. DETECT: Alert on unexpected subprocess spawning or network calls during model load/deserialization operations.
6. BLOCK: Prefer the pickle-free SafeTensors format for model serialization where possible.
What systems are affected by GHSA-wccx-j62j-r448?
This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps CI/CD pipelines, model registries, data science notebooks.
What is the CVSS score for GHSA-wccx-j62j-r448?
No CVSS score has been assigned yet.
Technical Details
NVD Description
# Assessment

The missing pickle entrypoints `pickle.loads`, `_pickle.loads`, and `_pickle.load` were added to the hook in https://github.com/trailofbits/fickling/commit/8c24c6edabceab156cfd41f4d70b650e1cdad1f7.

# Original report

## Summary

`fickling.always_check_safety()` does not hook all pickle entry points. `pickle.loads`, `_pickle.loads`, and `_pickle.load` remain unprotected, enabling malicious payload execution despite global safety mode being enabled.

## Affected versions

`<= 0.1.8` (verified on current upstream HEAD as of 2026-03-03)

## Non-duplication check against published Fickling GHSAs

No published advisory covers hook-coverage bypass in `run_hook()`. Existing advisories are blocklist/detection bypasses (runpy, pty, cProfile, marshal/types, builtins, network constructors, OBJ visibility, etc.), not runtime hook coverage parity.

## Root cause

`run_hook()` patches only:

- `pickle.load`
- `pickle.Unpickler`
- `_pickle.Unpickler`

It does not patch:

- `pickle.loads`
- `_pickle.load`
- `_pickle.loads`

## Reproduction (clean upstream)

```python
import io, pickle, _pickle
from unittest.mock import patch

import fickling
from fickling.exception import UnsafeFileError

class Payload:
    def __reduce__(self):
        import subprocess
        return (subprocess.Popen, (['echo', 'BYPASS'],))

data = pickle.dumps(Payload())
fickling.always_check_safety()

# Bypass path
with patch('subprocess.Popen') as popen_mock:
    pickle.loads(data)
print('bypass sink called?', popen_mock.called)  # True

# Control path is blocked
with patch('subprocess.Popen') as popen_mock:
    try:
        pickle.load(io.BytesIO(data))
    except UnsafeFileError:
        pass
print('blocked sink called?', popen_mock.called)  # False
```

Observed on vulnerable code:

- `pickle.loads` executes payload
- `pickle.load` is blocked

## Minimal patch diff

```diff
--- a/fickling/hook.py
+++ b/fickling/hook.py
@@ def run_hook():
-    pickle.load = loader.load
+    pickle.load = loader.load
+    _pickle.load = loader.load
+    pickle.loads = loader.loads
+    _pickle.loads = loader.loads
```

## Validation after patch

- `pickle.loads`, `_pickle.loads`, and `_pickle.load` all raise `UnsafeFileError`
- sink not called in any path

Regression tests added locally in `test/test_security_regressions_20260303.py`:

- `test_run_hook_blocks_pickle_loads`
- `test_run_hook_blocks__pickle_load_and_loads`

## Impact

High-confidence runtime protection bypass for applications that trust `always_check_safety()` as a global guard.
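The root cause is hook-coverage parity: a guard must wrap every deserialization entry point, not just `pickle.load`. The following is a minimal illustrative sketch of that pattern using a stand-in guard that only records which entry point was reached; it is not fickling's actual implementation:

```python
import io
import pickle
import _pickle

calls = []

def _guard(label):
    # Stand-in for a real safety check; here it only records the entry point.
    calls.append(label)

# Keep originals so the wrappers can delegate to them.
_orig = {
    "pickle.load": pickle.load,
    "pickle.loads": pickle.loads,
    "_pickle.load": _pickle.load,
    "_pickle.loads": _pickle.loads,
}

def _wrap(name, fn):
    def wrapped(*args, **kwargs):
        _guard(name)
        return fn(*args, **kwargs)
    return wrapped

# Hook all four function entry points, not just pickle.load.
pickle.load = _wrap("pickle.load", _orig["pickle.load"])
pickle.loads = _wrap("pickle.loads", _orig["pickle.loads"])
_pickle.load = _wrap("_pickle.load", _orig["_pickle.load"])
_pickle.loads = _wrap("_pickle.loads", _orig["_pickle.loads"])

data = pickle.dumps([1, 2, 3])
pickle.loads(data)
pickle.load(io.BytesIO(data))
_pickle.loads(data)
_pickle.load(io.BytesIO(data))
print(calls)  # every entry point routed through the guard
```

Because `pickle` binds `load`/`loads` from `_pickle` at import time, patching one module's names does not cover the other; both must be patched, which is exactly the gap the advisory describes.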
Exploitation Scenario
An adversary targets an ML platform that validates user-submitted PyTorch models using fickling.always_check_safety() before serving them. The adversary crafts a malicious .pt file embedding a pickle payload with a __reduce__ method that spawns a reverse shell or exfiltrates credentials. Because the payload is designed to be loaded via pickle.loads (the standard PyTorch load path), fickling's patched pickle.load hook is never triggered. The model passes validation, is deployed to the inference cluster, and executes arbitrary code with the service account's privileges on first inference request. The attack requires no authentication beyond standard model submission access.
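The detection advice in this advisory can be approximated in-process with CPython's auditing hooks (PEP 578): the `pickle.find_class` audit event fires for every global the unpickler resolves, in both the pure-Python and C (`_pickle`) implementations, so it is not bypassed by the entry-point trick described above. A hedged sketch with an illustrative deny-list (a real deployment would alert on anomalies rather than hard-code pairs):

```python
import pickle
import sys

# Illustrative deny-list of dangerous (module, name) pairs.
BLOCKED_GLOBALS = {
    ("subprocess", "Popen"),
    ("os", "system"),
    ("builtins", "eval"),
    ("builtins", "exec"),
}

def audit(event, args):
    # 'pickle.find_class' fires with (module, name) for each global resolved
    # during unpickling, including via pickle.loads and _pickle.loads.
    if event == "pickle.find_class" and tuple(args) in BLOCKED_GLOBALS:
        raise RuntimeError(f"blocked pickle global: {args[0]}.{args[1]}")

sys.addaudithook(audit)  # note: audit hooks cannot be removed once installed

class Payload:
    def __reduce__(self):
        import subprocess
        return (subprocess.Popen, (["echo", "BYPASS"],))

data = pickle.dumps(Payload())
try:
    pickle.loads(data)
    outcome = "payload executed"
except Exception as exc:
    outcome = f"blocked: {exc}"
print(outcome)
```

This is a coarse last-resort tripwire, not a substitute for avoiding pickle on untrusted input; a determined payload can reach code execution through globals not on any fixed deny-list.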
Related Vulnerabilities (same package: fickling)
- GHSA-5hwf-rc88-82xm: Allowlist Bypass evades input filtering
- GHSA-mhc9-48gj-9gp3: Allowlist Bypass evades input filtering
- GHSA-5cxw-w2xg-2m8h: Allowlist Bypass evades input filtering
- GHSA-r48f-3986-4f9c: Allowlist Bypass evades input filtering
- GHSA-mxhj-88fx-4pcv: security flaw enables exploitation