CVE-2025-44779: Ollama: arbitrary file deletion via /api/pull
Severity: MEDIUM | PoC available | CISA SSVC: Track

Ollama v0.1.33 allows any local user to delete arbitrary files by sending a crafted request to the /api/pull endpoint—no privileges required. Environments running Ollama for LLM inference, including developer workstations and internal GPU servers, should restrict API access to trusted processes and update to a patched release. The local attack vector limits internet exposure, but file deletion targeting model weights, configs, or security tooling is a credible availability and integrity risk.
Risk Assessment
Medium risk overall but elevated for AI development and shared inference environments. The local attack vector (AV:L) prevents direct internet exploitation, yet Ollama is widely deployed on developer workstations and internal servers that often lack network isolation on port 11434. No privileges are required (PR:N), meaning any local user or co-resident process can trigger the attack. User interaction (UI:R) adds a mild barrier—typically bypassed via social engineering or a malicious wrapper script. High availability impact (A:H) combined with trivial exploitability makes this dangerous wherever Ollama runs with broad filesystem access.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| ollama | pip | — | No patch |
If your environment runs Ollama v0.1.33, you should assume it is affected.
Recommended Action
1. Update Ollama beyond v0.1.33—check the official GitHub repository (github.com/ollama/ollama) for the patched release; no confirmed patch version was available in NVD at time of analysis.
2. Restrict access to the Ollama API (default port 11434) via firewall rules or by binding exclusively to 127.0.0.1; never expose it to untrusted networks.
3. Run Ollama under a dedicated service account with the minimum filesystem permissions required—model directory only.
4. Monitor and alert on anomalous requests to the /api/pull endpoint, particularly payloads with path traversal patterns (../, %2F, encoded slashes).
5. Audit all Ollama deployments in CI/CD pipelines and shared developer environments.
6. Consider AppArmor or seccomp profiles to restrict the filesystem operations Ollama can perform.
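The monitoring recommendation in step 4 can be sketched as a simple payload check. This is an illustrative detector, not production tooling: it assumes /api/pull request bodies are JSON with a `name` or `model` field (the field names are an assumption based on Ollama's documented pull API) and flags values that look like filesystem paths rather than registry model references.

```python
import json
import re
from urllib.parse import unquote

# Patterns that suggest path traversal in a model name:
# plain "../", backslash variant, and URL-encoded forms.
TRAVERSAL = re.compile(r"(\.\./|\.\.\\|%2e%2e|%2f|%5c)", re.IGNORECASE)


def is_suspicious_pull(body: str) -> bool:
    """Flag an /api/pull request body whose model name looks like a
    filesystem path rather than a registry reference like 'llama3:8b'."""
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        # Malformed JSON sent to this endpoint is itself anomalous.
        return True
    name = str(payload.get("name") or payload.get("model") or "")
    decoded = unquote(unquote(name))  # catch double-encoded slashes
    return bool(
        TRAVERSAL.search(name)
        or TRAVERSAL.search(decoded)
        or name.startswith("/")
    )
```

A hook like this can sit in a reverse proxy or log pipeline in front of port 11434; anything it flags warrants an alert rather than an automatic block, since model names legitimately contain slashes (e.g. `library/llama3`).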
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Classification: Track
Frequently Asked Questions
What is CVE-2025-44779?
Ollama v0.1.33 allows any local user to delete arbitrary files by sending a crafted request to the /api/pull endpoint—no privileges required. Environments running Ollama for LLM inference, including developer workstations and internal GPU servers, should restrict API access to trusted processes and update to a patched release. The local attack vector limits internet exposure, but file deletion targeting model weights, configs, or security tooling is a credible availability and integrity risk.
Is CVE-2025-44779 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2025-44779, increasing the risk of exploitation.
How to fix CVE-2025-44779?
1. Update Ollama beyond v0.1.33—check the official GitHub repository (github.com/ollama/ollama) for the patched release; no confirmed patch version was available in NVD at time of analysis.
2. Restrict access to the Ollama API (default port 11434) via firewall rules or by binding exclusively to 127.0.0.1; never expose it to untrusted networks.
3. Run Ollama under a dedicated service account with the minimum filesystem permissions required—model directory only.
4. Monitor and alert on anomalous requests to the /api/pull endpoint, particularly payloads with path traversal patterns (../, %2F, encoded slashes).
5. Audit all Ollama deployments in CI/CD pipelines and shared developer environments.
6. Consider AppArmor or seccomp profiles to restrict the filesystem operations Ollama can perform.
What systems are affected by CVE-2025-44779?
This vulnerability affects the following AI/ML architecture patterns: local LLM inference, model serving, AI development workstations, internal AI infrastructure, CI/CD model pipelines.
What is the CVSS score for CVE-2025-44779?
CVE-2025-44779 has a CVSS v3.1 base score of 6.6 (MEDIUM). The EPSS exploitation probability is 0.04%.
Technical Details
NVD Description
An issue in Ollama v0.1.33 allows attackers to delete arbitrary files via sending a crafted packet to the endpoint /api/pull.
Exploitation Scenario
An attacker with local access to a machine running Ollama—whether via a compromised developer account, a malicious process sharing the host, or a CSRF-triggered request from the browser—sends a crafted HTTP POST to http://localhost:11434/api/pull with a manipulated model name parameter that traverses the filesystem. Due to CWE-20 (improper input validation) and CWE-552 (files or directories accessible to external parties), the Ollama service processes the malformed input and deletes an arbitrary file accessible under its running permissions. In a realistic scenario, an attacker embeds the malicious pull request inside a developer tool or shell script, satisfying the UI:R requirement through social engineering. Target files could include model weights (DoS on inference), Ollama's config (service crash), or SSH authorized_keys (persistence setup for further compromise).
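The root cause is the missing server-side validation of the model name. A minimal sketch of the kind of allowlist check that closes the hole (this is an illustrative fix pattern, not Ollama's actual patch; the regex and function name are assumptions):

```python
import re

# Allowlist for registry-style model references such as "llama3:8b"
# or "library/llama3:8b": alphanumeric segments with ., _, -,
# at most one "/" namespace separator and one ":" tag separator.
MODEL_NAME = re.compile(
    r"^[A-Za-z0-9][A-Za-z0-9._\-]*"      # name (or namespace)
    r"(/[A-Za-z0-9][A-Za-z0-9._\-]*)?"   # optional /name
    r"(:[A-Za-z0-9._\-]+)?$"             # optional :tag
)


def validate_model_name(name: str) -> str:
    """Reject anything resembling a filesystem path before the name
    is ever used to build a path on disk. The explicit ".." check is
    deliberately stricter than the regex alone."""
    if ".." in name or name.startswith(("/", "\\")) or not MODEL_NAME.match(name):
        raise ValueError(f"rejected model name: {name!r}")
    return name
```

The key design choice is allowlisting the expected shape of the input rather than blocklisting known-bad sequences, so encoded or novel traversal variants fail closed.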
CVSS Vector
CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:L/I:L/A:H
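This vector reproduces the 6.6 base score via the CVSS v3.1 base equations. A minimal sketch of the arithmetic, handling scope-unchanged (S:U) vectors only; the helper names are illustrative, and the weights are the published FIRST.org values:

```python
# CVSS v3.1 base-score metric weights (scope-unchanged case).
W = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},  # S:U values for PR
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}


def roundup(x: float) -> float:
    """CVSS spec rounding: smallest one-decimal value >= x."""
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0


def base_score(vector: str) -> float:
    """Base score for a scope-unchanged CVSS:3.1 vector string."""
    m = dict(p.split(":") for p in vector.split("/")[1:])  # drop "CVSS:3.1"
    iss = 1 - (1 - W["CIA"][m["C"]]) * (1 - W["CIA"][m["I"]]) * (1 - W["CIA"][m["A"]])
    impact = 6.42 * iss  # scope-unchanged impact sub-score
    exploitability = 8.22 * W["AV"][m["AV"]] * W["AC"][m["AC"]] * W["PR"][m["PR"]] * W["UI"][m["UI"]]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))
```

For this CVE: impact ≈ 4.70, exploitability ≈ 1.83, and roundup(6.536) gives the published 6.6.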
Related Vulnerabilities

| CVE | Score | Summary | Relation |
|---|---|---|---|
| CVE-2025-63389 | 9.8 | ollama: Missing Auth allows unauthenticated access | Same package: ollama |
| CVE-2026-44007 | 9.1 | vm2: sandbox escape via nesting:true enables RCE | Same package: ollama |
| CVE-2026-7482 | 9.1 | Ollama: heap OOB read leaks API keys and chat data | Same package: ollama |
| CVE-2024-37032 | 8.8 | Ollama: path traversal enables RCE via model blob API | Same package: ollama |
| CVE-2024-39720 | 8.2 | Ollama: OOB read in GGUF parser enables remote DoS | Same package: ollama |