CVE-2024-4897: lollms-webui: RCE via malicious GGUF model loading

Severity: Unknown | PoC: Available | CISA SSVC: Attend
Published July 2, 2024
CISO Take

Any deployment of lollms-webui with the bindings_zoo feature enabled is vulnerable to full server compromise — an attacker simply needs to trick a user into loading a crafted GGUF file hosted on HuggingFace. Patch is not yet available as of the disclosure commit; disable the bindings_zoo feature or take the instance offline until resolved. This is a supply chain failure: lollms-webui ships a known-vulnerable llama-cpp-python (CVE-2024-34359) and exposes it directly to untrusted model input.

Risk Assessment

Effective severity is CRITICAL despite missing CVSS scoring. RCE from a malicious model file requires no authentication if the bindings_zoo feature is exposed, and exploitation is straightforward given CVE-2024-34359's public disclosure. Attack surface is any organization running lollms-webui for internal LLM serving or experimentation — common in AI-forward security and R&D teams. The dependency on HuggingFace as a model source amplifies exposure since adversaries can publish weaponized models at zero cost.

Affected Systems

Package: lollms_web_ui
Vulnerable Range: all versions up to the latest at disclosure (commit b454f40a)
Patched: No patch available

Do you use lollms_web_ui? You're affected.

Severity & Risk

CVSS 3.1: N/A (no score assigned)
EPSS: 0.8% chance of exploitation in the next 30 days (higher than 74% of all CVEs)
Exploitation Status: Exploit Available
Exploitation Likelihood: Medium
Sophistication: Moderate
Exploitation Confidence: Medium
CISA SSVC: Public PoC
Public PoC indexed (trickest/cve)

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Recommended Action

6 steps
  1. Immediately disable or restrict access to the bindings_zoo/binding_zoo feature in lollms-webui.

  2. Upgrade llama-cpp-python to a version patching CVE-2024-34359 (>=0.2.72 or vendor-confirmed patched build).

  3. Block loading of model files from untrusted external sources (HuggingFace, arbitrary URLs) at the network or application level.

  4. Run lollms-webui in a sandboxed environment (non-privileged container, restricted filesystem, network egress controls).

  5. Audit the server for indicators of compromise if bindings_zoo was enabled and accessible.

  6. Pin and scan all AI framework dependencies in CI/CD pipelines; add llama-cpp-python to your SCA tooling with alerts for known CVEs.
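Step 2's version floor can be checked programmatically. A minimal sketch, assuming the first patched release is 0.2.72 (confirm against vendor advisories) and that the installed distribution is named llama_cpp_python:

```python
# Minimal sketch: check whether the installed llama-cpp-python meets the
# assumed CVE-2024-34359 patch floor (0.2.72 -- confirm with vendor advisories).
from importlib.metadata import version, PackageNotFoundError

PATCH_FLOOR = (0, 2, 72)  # assumed first patched release

def parse_version(v: str) -> tuple:
    # Drop local build tags such as "+cpuavx2" before comparing numerically.
    return tuple(int(part) for part in v.split("+")[0].split(".")[:3])

def llama_cpp_is_patched() -> bool:
    try:
        installed = version("llama_cpp_python")
    except PackageNotFoundError:
        return True  # not installed: nothing to patch
    return parse_version(installed) >= PATCH_FLOOR
```

Wiring a check like this into CI alongside the SCA alerts in step 6 catches regressions where a pinned wheel silently reintroduces the vulnerable build.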

CISA SSVC Assessment

Decision: Attend
Exploitation: PoC
Automatable: No
Technical Impact: Total

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Compliance Impact

This CVE is relevant to:

EU AI Act: Article 9 - Risk management system
ISO 42001: A.6.2 - AI supply chain; A.9.3 - AI system security
NIST AI RMF: GOVERN-6.1 - Policies for third-party AI components; MANAGE-2.2 - AI risk response (treatment of identified risks)
OWASP LLM Top 10: LLM05 - Supply Chain Vulnerabilities

Frequently Asked Questions

What is CVE-2024-4897?

CVE-2024-4897 is a remote code execution vulnerability in parisneo/lollms-webui. The application's bindings_zoo feature lets users load GGUF model files (for example, from HuggingFace) and passes them to a bundled llama-cpp-python build that is vulnerable to CVE-2024-34359, so a crafted model file yields code execution on the server. No patch is available as of disclosure commit b454f40a; disable the bindings_zoo feature or take the instance offline until resolved.

Is CVE-2024-4897 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2024-4897, increasing the risk of exploitation.

How to fix CVE-2024-4897?

  1. Immediately disable or restrict access to the bindings_zoo/binding_zoo feature in lollms-webui.
  2. Upgrade llama-cpp-python to a version patching CVE-2024-34359 (>=0.2.72 or vendor-confirmed patched build).
  3. Block loading of model files from untrusted external sources (HuggingFace, arbitrary URLs) at the network or application level.
  4. Run lollms-webui in a sandboxed environment (non-privileged container, restricted filesystem, network egress controls).
  5. Audit the server for indicators of compromise if bindings_zoo was enabled and accessible.
  6. Pin and scan all AI framework dependencies in CI/CD pipelines; add llama-cpp-python to your SCA tooling with alerts for known CVEs.

What systems are affected by CVE-2024-4897?

This vulnerability affects the following AI/ML architecture patterns: LLM inference servers, self-hosted model serving, AI development workstations, internal AI platforms, model experimentation environments.

What is the CVSS score for CVE-2024-4897?

No CVSS score has been assigned yet.

Technical Details

NVD Description

parisneo/lollms-webui, in its latest version, is vulnerable to remote code execution due to an insecure dependency on llama-cpp-python version llama_cpp_python-0.2.61+cpuavx2-cp311-cp311-manylinux_2_31_x86_64. The vulnerability arises from the application's 'binding_zoo' feature, which allows attackers to upload and interact with a malicious model file hosted on hugging-face, leading to remote code execution. The issue is linked to a known vulnerability in llama-cpp-python, CVE-2024-34359, which has not been patched in lollms-webui as of commit b454f40a. The vulnerability is exploitable through the application's handling of model files in the 'bindings_zoo' feature, specifically when processing gguf format model files.

Exploitation Scenario

Adversary creates a specially crafted GGUF model file whose chat-template metadata carries a malicious Jinja2 payload (CVE-2024-34359 is a server-side template injection in llama-cpp-python's template handling), triggering OS command execution when the template is rendered. They publish this model on HuggingFace under a plausible name (e.g., a fine-tuned Llama variant), then send a phishing link or socially engineer a user of the target's lollms-webui instance into loading the malicious model via the bindings_zoo interface. On model load, the payload executes under the server process: the attacker gains a reverse shell, dumps credentials from the host, and pivots to internal network resources or cloud metadata endpoints.
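Because the trigger rides in the model file itself, a coarse pre-load check is possible. A hedged sketch, not a full GGUF parser: it scans the file's leading bytes (where GGUF metadata, including any chat template, typically sits) for tokens commonly used in Jinja2 sandbox escapes. The token list is illustrative, not exhaustive, and a clean scan does not prove a file is safe.

```python
# Hedged heuristic: flag GGUF files whose leading bytes contain strings
# commonly seen in Jinja2 sandbox-escape payloads (CVE-2024-34359 vector).
# Illustrative token list; a clean result is NOT a guarantee of safety.
SUSPICIOUS_TOKENS = [
    b"__globals__",
    b"__subclasses__",
    b"__import__",
    b"__builtins__",
    b"os.popen",
    b"subprocess",
]

def scan_gguf_metadata(path: str, head_bytes: int = 1 << 20) -> list:
    """Return the suspicious tokens found in the first head_bytes of the file."""
    with open(path, "rb") as f:
        blob = f.read(head_bytes)
    return [tok.decode() for tok in SUSPICIOUS_TOKENS if tok in blob]
```

A check like this fits naturally as a gate before the model loader runs, or as a scanner over an existing model cache during the IoC audit in step 5.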

Timeline

Published
July 2, 2024
Last Modified
July 9, 2025
First Seen
July 2, 2024

Related Vulnerabilities

CVE-2024-34359 - llama-cpp-python server-side template injection (the upstream root cause of this issue)