If you use Fickling to gate-check pickle files before loading AI models or datasets, your security control has a blind spot: attackers can craft malicious pickles using `ctypes`, `importlib`, `runpy`, `code`, or `multiprocessing` that pass Fickling's analysis as `LIKELY_SAFE`. Upgrade fickling to 0.1.7 immediately and re-audit any models validated with prior versions. Until patched, do not rely on Fickling as the sole safety control for untrusted pickle artifacts.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| fickling | pip | <= 0.1.6 | 0.1.7 |
If you run any fickling version <= 0.1.6, you are affected.
Severity & Risk
Recommended Action
1. PATCH: Upgrade fickling to 0.1.7 immediately (`pip install "fickling>=0.1.7"`).
2. AUDIT: Identify all pipelines, CI/CD jobs, and model-loading workflows that call fickling or `check_safety()`, and re-validate any models previously cleared by older versions.
3. DEFENSE-IN-DEPTH: Do not rely on Fickling alone; layer it with network egress controls, sandboxed model loading (subprocess isolation, containers with no credentials), and model signing/provenance verification.
4. DETECT: Alert on any process spawned from Python pickle deserialization code; monitor for unexpected `ctypes`, `subprocess`, or `socket` calls from ML worker processes.
5. POLICY: For models from untrusted sources, require execution in isolated environments (e.g., gVisor, Firecracker microVMs) regardless of static-analysis verdict.
6. HARDEN: Prefer the safetensors format over pickle for model serialization where possible; it eliminates this attack class entirely.
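The defense-in-depth recommendation can be complemented at the application layer with the Python documentation's own pattern for restricting unpickling: subclass `pickle.Unpickler` and make `find_class` deny-by-default. This is a minimal sketch (the `SAFE_BUILTINS` allowlist and `restricted_loads` helper are illustrative names, not part of Fickling), not a substitute for sandboxing:

```python
import io
import pickle

# Illustrative allowlist: only these builtins may be resolved by a pickle.
SAFE_BUILTINS = {"range", "complex", "set", "frozenset"}

class RestrictedUnpickler(pickle.Unpickler):
    # Deny-by-default: any GLOBAL/STACK_GLOBAL lookup not on the
    # allowlist aborts the load before any code can run.
    def find_class(self, module, name):
        if module == "builtins" and name in SAFE_BUILTINS:
            return getattr(__import__(module), name)
        raise pickle.UnpicklingError(
            f"import of {module}.{name} is forbidden"
        )

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# The ctypes payload from the PoC below the fold is rejected at load time:
malicious = b"\x80\x04cctypes\npythonapi\n."
try:
    restricted_loads(malicious)
except pickle.UnpicklingError as exc:
    print("blocked:", exc)

# Plain data structures still round-trip normally:
print(restricted_loads(pickle.dumps([1, 2, 3])))
```

Because the check happens inside the unpickler itself, it does not depend on any static analyzer's module list being complete.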
Classification
Compliance Impact
This CVE is relevant to:
Technical Details
NVD Description
# Fickling's assessment

`ctypes`, `importlib`, `runpy`, `code`, and `multiprocessing` were added to the list of unsafe imports (https://github.com/trailofbits/fickling/commit/9a2b3f89bd0598b528d62c10a64c1986fcb09f66, https://github.com/trailofbits/fickling/commit/eb299b453342f1931c787bcb3bc33f3a03a173f9, https://github.com/trailofbits/fickling/commit/29d5545e74b07766892c1f0461b801afccee4f91, https://github.com/trailofbits/fickling/commit/b793563e60a5e039c5837b09d7f4f6b92e6040d1).

# Original report

## Summary

The `unsafe_imports()` method in Fickling's static analyzer fails to flag several high-risk Python modules that can be used for arbitrary code execution. Malicious pickles importing these modules will not be detected as unsafe, allowing attackers to bypass Fickling's primary static safety checks.

## Details

In `fickling/fickle.py`, lines 866-884, the `unsafe_imports()` method checks imported modules against a hardcoded tuple:

```python
def unsafe_imports(self) -> Iterator[ast.Import | ast.ImportFrom]:
    for node in self.properties.imports:
        if node.module in (
            "__builtin__",
            "__builtins__",
            "builtins",
            "os",
            "posix",
            "nt",
            "subprocess",
            "sys",
            "builtins",
            "socket",
            "pty",
            "marshal",
            "types",
        ):
            yield node
```

This list is incomplete. The following dangerous modules are NOT detected:

- **ctypes**: allows arbitrary memory access, calling C functions, and bypassing Python restrictions entirely
- **importlib**: can dynamically import any module at runtime
- **runpy**: can execute Python modules as scripts
- **code**: can compile and execute arbitrary Python code
- **multiprocessing**: can spawn processes running arbitrary code

Since `ctypes` is part of the Python standard library, it also bypasses the `NonStandardImports` analysis.
## PoC

```python
from fickling.fickle import Pickled
from fickling.analysis import check_safety, Severity

# Pickle that imports ctypes.pythonapi (allows arbitrary code execution)
# Opcodes: PROTO 4, GLOBAL 'ctypes pythonapi', STOP
payload = b'\x80\x04cctypes\npythonapi\n.'

pickled = Pickled.load(payload)
results = check_safety(pickled)
print(f"Severity: {results.severity.name}")
print(f"Is safe: {results.severity == Severity.LIKELY_SAFE}")

# Output: severity is LIKELY_SAFE or low - the ctypes import is not flagged
# A truly malicious pickle using ctypes could execute arbitrary code
```

## Impact

**Security Bypass (Confidentiality, Integrity, Availability)**

An attacker can craft a malicious pickle that:

1. Imports `ctypes` to gain arbitrary memory access
2. Uses `ctypes.pythonapi` or `ctypes.CDLL` to execute arbitrary code
3. Passes Fickling's safety analysis as "likely safe"
4. Executes malicious code when the victim loads the pickle after trusting Fickling's verdict

This undermines the core purpose of Fickling as a pickle safety scanner.
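The import the PoC payload carries can also be surfaced with nothing but the standard library: `pickletools.genops()` disassembles the opcode stream without ever executing it, so it is safe to run on hostile input. This is a minimal sketch of such an independent check, not a replacement for Fickling (it only handles the inline `GLOBAL` form, not `STACK_GLOBAL`, where the module and name arrive on the stack):

```python
import pickletools

# The PoC payload: PROTO 4, GLOBAL 'ctypes pythonapi', STOP.
payload = b"\x80\x04cctypes\npythonapi\n."

# genops() yields (opcode, argument, position) tuples without
# executing anything. GLOBAL carries "module name" inline.
imports = [
    arg
    for op, arg, pos in pickletools.genops(payload)
    if op.name == "GLOBAL"
]
print(imports)  # ['ctypes pythonapi']
```

A second, opcode-level opinion like this makes a scanner's module allowlist less of a single point of failure.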
Exploitation Scenario
An attacker publishes a poisoned PyTorch model to a public model registry or submits it via a third-party vendor handoff. The model file is a valid-looking pickle that, on deserialization, calls `ctypes.CDLL` or `ctypes.pythonapi` to load a shared library or execute a shellcode stub, establishing a reverse shell or exfiltrating cloud IAM credentials. The victim organization's MLOps pipeline runs Fickling as a pre-load safety gate; fickling <= 0.1.6 returns `Severity.LIKELY_SAFE` because `ctypes` is not in the unsafe-module list. The pipeline proceeds to load the model on a GPU training node with access to S3 buckets, internal APIs, and GPU cluster credentials, and the attacker gains persistent access to the training infrastructure.
Weaknesses (CWE)
References
- github.com/advisories/GHSA-q5qq-mvfm-j35x
- github.com/trailofbits/fickling/blob/977b0769c13537cd96549c12bb537f05464cf09c/test/test_bypasses.py
- github.com/trailofbits/fickling/commit/29d5545e74b07766892c1f0461b801afccee4f91
- github.com/trailofbits/fickling/commit/6b400e1a2525e6a4a076c97ccc0d4d9581317101
- github.com/trailofbits/fickling/commit/9a2b3f89bd0598b528d62c10a64c1986fcb09f66
- github.com/trailofbits/fickling/commit/b793563e60a5e039c5837b09d7f4f6b92e6040d1
- github.com/trailofbits/fickling/commit/eb299b453342f1931c787bcb3bc33f3a03a173f9
- github.com/trailofbits/fickling/pull/195
- github.com/trailofbits/fickling/releases/tag/v0.1.7
- github.com/trailofbits/fickling/security/advisories/GHSA-q5qq-mvfm-j35x
- nvd.nist.gov/vuln/detail/CVE-2026-22609