CVE-2025-44779: Ollama: arbitrary file deletion via /api/pull

Severity: Medium · PoC available · CISA SSVC: Track*
Published August 7, 2025
CISO Take

Ollama v0.1.33 allows any local user to delete arbitrary files by sending a crafted request to the /api/pull endpoint—no privileges required. Environments running Ollama for LLM inference, including developer workstations and internal GPU servers, should restrict API access to trusted processes and update to a patched release. The local attack vector limits internet exposure, but file deletion targeting model weights, configs, or security tooling is a credible availability and integrity risk.

Risk Assessment

Medium risk overall but elevated for AI development and shared inference environments. The local attack vector (AV:L) prevents direct internet exploitation, yet Ollama is widely deployed on developer workstations and internal servers that often lack network isolation on port 11434. No privileges are required (PR:N), meaning any local user or co-resident process can trigger the attack. User interaction (UI:R) adds a mild barrier—typically bypassed via social engineering or a malicious wrapper script. High availability impact (A:H) combined with trivial exploitability makes this dangerous wherever Ollama runs with broad filesystem access.

Affected Systems

Package   Ecosystem   Vulnerable Range   Patched
ollama    pip         (not specified)    No patch


Severity & Risk

CVSS 3.1: 6.6 / 10 (Medium)
EPSS: 0.04% chance of exploitation in the next 30 days (higher than 12% of all CVEs)
Exploitation Status: Exploit Available
Exploitation: Medium
Sophistication: Trivial
Exploitation Confidence: Medium
CISA SSVC: Public PoC
Public PoC indexed (trickest/cve)
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

AV: Local
AC: Low
PR: None
UI: Required
S: Unchanged
C: Low
I: Low
A: High

Recommended Action

6 steps
  1. Update Ollama beyond v0.1.33—check the official GitHub repository (github.com/ollama/ollama) for the patched release; no confirmed patch version was available in NVD at time of analysis.

  2. Restrict access to the Ollama API (default port 11434) via firewall rules or by binding exclusively to 127.0.0.1; never expose it to untrusted networks.

  3. Run Ollama under a dedicated service account with the minimum filesystem permissions required—model directory only.

  4. Monitor and alert on anomalous requests to the /api/pull endpoint, particularly payloads containing path traversal patterns (../ sequences, %2F and other percent-encoded slashes).

  5. Audit all Ollama deployments in CI/CD pipelines and shared developer environments.

  6. Consider AppArmor or seccomp profiles to restrict the filesystem operations Ollama can perform.
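The traversal patterns named in step 4 can be screened with a small check before or after URL-decoding. This is an illustrative sketch, not part of Ollama or any official tooling; the function name and the pattern list are assumptions to be tuned to your own log format.

```python
import re
from urllib.parse import unquote

# Patterns suggesting path traversal in a model-name parameter:
# literal "../" (and the backslash variant), plus percent-encoded
# dots and slashes. Illustrative only; extend for your environment.
TRAVERSAL_RE = re.compile(r"(\.\./|\.\.\\|%2e%2e|%2f|%5c)", re.IGNORECASE)

def is_suspicious_model_name(name: str) -> bool:
    """Flag model names containing traversal sequences,
    checked both raw and after URL-decoding."""
    decoded = unquote(name)
    return bool(TRAVERSAL_RE.search(name) or TRAVERSAL_RE.search(decoded))
```

A legitimate name such as `llama3:8b` passes, while `../../etc/passwd` or its percent-encoded form is flagged; wiring this into log-pipeline alerting covers the monitoring recommendation above.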

CISA SSVC Assessment

Decision: Track*
Exploitation: PoC
Automatable: No
Technical Impact: Partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.2 - AI system security
NIST AI RMF
MANAGE 2.2 - Mechanisms to sustain the value of deployed AI systems
OWASP LLM Top 10
LLM06:2025 - Excessive Agency

Frequently Asked Questions

What is CVE-2025-44779?

Ollama v0.1.33 allows any local user to delete arbitrary files by sending a crafted request to the /api/pull endpoint—no privileges required. Environments running Ollama for LLM inference, including developer workstations and internal GPU servers, should restrict API access to trusted processes and update to a patched release. The local attack vector limits internet exposure, but file deletion targeting model weights, configs, or security tooling is a credible availability and integrity risk.

Is CVE-2025-44779 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2025-44779, increasing the risk of exploitation.

How to fix CVE-2025-44779?

1. Update Ollama beyond v0.1.33—check the official GitHub repository (github.com/ollama/ollama) for the patched release; no confirmed patch version was available in NVD at time of analysis.
2. Restrict access to the Ollama API (default port 11434) via firewall rules or by binding exclusively to 127.0.0.1; never expose it to untrusted networks.
3. Run Ollama under a dedicated service account with the minimum filesystem permissions required—model directory only.
4. Monitor and alert on anomalous requests to the /api/pull endpoint, particularly payloads containing path traversal patterns (../ sequences, %2F and other percent-encoded slashes).
5. Audit all Ollama deployments in CI/CD pipelines and shared developer environments.
6. Consider AppArmor or seccomp profiles to restrict the filesystem operations Ollama can perform.

What systems are affected by CVE-2025-44779?

This vulnerability affects the following AI/ML architecture patterns: local LLM inference, model serving, AI development workstations, internal AI infrastructure, CI/CD model pipelines.

What is the CVSS score for CVE-2025-44779?

CVE-2025-44779 has a CVSS v3.1 base score of 6.6 (MEDIUM). The EPSS exploitation probability is 0.04%.

Technical Details

NVD Description

An issue in Ollama v0.1.33 allows attackers to delete arbitrary files via sending a crafted packet to the endpoint /api/pull.

Exploitation Scenario

An attacker with local access to a machine running Ollama—whether via a compromised developer account, a malicious process sharing the host, or a CSRF-triggered request from the browser—sends a crafted HTTP POST to http://localhost:11434/api/pull with a manipulated model name parameter that traverses the filesystem. Due to CWE-20 (improper input validation) and CWE-552 (files or directories accessible to external parties), the Ollama service processes the malformed input and deletes an arbitrary file accessible under its running permissions. In a realistic scenario, an attacker embeds the malicious pull request inside a developer tool or shell script, satisfying the UI:R requirement through social engineering. Target files could include model weights (denial of service on inference), Ollama's config (service crash), or security-critical files such as SSH authorized_keys (locking legitimate users out of key-based access and weakening the host's security posture).
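The CWE-20 root-cause class can be illustrated by the kind of canonicalization check a server should apply before any filesystem operation on a user-supplied model name. This is a hypothetical sketch of the defensive pattern, not Ollama's actual code; the directory path and function name are assumptions.

```python
import os

MODELS_DIR = "/var/lib/ollama/models"  # hypothetical model store root

def resolve_model_path(model_name: str) -> str:
    """Map a user-supplied model name to a file path, rejecting
    names that escape the model directory after normalization."""
    candidate = os.path.normpath(os.path.join(MODELS_DIR, model_name))
    # normpath collapses any "../" segments; a name that escapes the
    # root yields a path whose common prefix is no longer MODELS_DIR.
    if os.path.commonpath([MODELS_DIR, candidate]) != MODELS_DIR:
        raise ValueError(f"model name escapes model directory: {model_name!r}")
    return candidate
```

Without a check of this kind, a name such as `../../etc/passwd` normalizes to a path outside the model store, and a subsequent delete operation becomes the arbitrary file deletion described above.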

Weaknesses (CWE)

CWE-20: Improper Input Validation
CWE-552: Files or Directories Accessible to External Parties

CVSS Vector

CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:L/I:L/A:H
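The vector string above decodes to the metrics listed under Attack Surface. A quick parse (plain dictionary lookup, no external library; the helper name is illustrative) confirms the mapping:

```python
VECTOR = "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:L/I:L/A:H"

# Human-readable expansions for the metric values used in this vector.
VALUE_NAMES = {
    "AV:L": "Local", "AC:L": "Low", "PR:N": "None", "UI:R": "Required",
    "S:U": "Unchanged", "C:L": "Low", "I:L": "Low", "A:H": "High",
}

def parse_vector(vector: str) -> dict:
    """Split a CVSS v3.1 vector into {metric: readable value}."""
    parts = vector.split("/")[1:]  # drop the "CVSS:3.1" prefix
    return {p.split(":")[0]: VALUE_NAMES[p] for p in parts}
```

Here `parse_vector(VECTOR)` yields eight metrics, with `AV` mapping to "Local" and `A` to "High", matching the table in the Attack Surface section.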

Timeline

Published
August 7, 2025
Last Modified
August 14, 2025
First Seen
August 7, 2025

Related Vulnerabilities