CVE-2025-1975: Ollama: DoS via malicious manifest in /api/pull

Status: PoC available · CISA SSVC: Track*
Published May 16, 2025
CISO Take

Ollama 0.5.11 crashes when processing a crafted model manifest through the /api/pull endpoint due to missing array index validation. Any user with network access to your Ollama instance can take down your LLM inference service. Update immediately and restrict /api/pull to trusted networks or authenticated users.

Risk Assessment

Risk is HIGH for organizations running Ollama in shared or network-accessible environments. Ollama ships with no authentication by default, meaning any network-reachable instance is trivially exploitable. The crash is deterministic — a single malformed request suffices. In DevOps and MLOps pipelines where Ollama runs as a shared inference backend, this translates directly to service disruption across dependent AI workloads. No evidence of active exploitation yet, but the exploit surface is large given Ollama's adoption in enterprise AI labs.

Affected Systems

Package    Ecosystem    Vulnerable Range    Patched
ollama     pip          —                   No patch

Do you use ollama? You're affected.

Severity & Risk

CVSS 3.1: N/A
EPSS: 0.5% chance of exploitation in 30 days (higher than 66% of all CVEs)
Exploitation Status: Exploit Available
Exploitation: MEDIUM
Sophistication: Trivial
Exploitation Confidence: Medium
CISA SSVC: Public PoC
Public PoC indexed (trickest/cve)

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Recommended Action

5 steps
  1. PATCH

    Upgrade Ollama beyond 0.5.11 immediately. Check https://github.com/ollama/ollama/releases for the fixed version.

  2. NETWORK ISOLATION

    Restrict the Ollama port (default 11434) to localhost or trusted subnets only using firewall rules. Never expose Ollama directly to the internet.

  3. AUTHENTICATION PROXY

    Place a reverse proxy (nginx, Caddy) with authentication in front of Ollama if multi-user access is required.

  4. DETECTION

    Alert on repeated 5xx errors or unexpected Ollama process restarts. Monitor for anomalous POST /api/pull requests from unexpected sources.

  5. WORKAROUND

    If patching is not immediate, disable or firewall the /api/pull endpoint when model pulling is not required at runtime.
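The network-isolation, authentication-proxy, and workaround steps above can be combined at a single reverse proxy. A minimal nginx sketch, assuming Ollama is bound to localhost on its default port 11434; the server name and htpasswd path are placeholders:

```nginx
server {
    listen 8443;                           # front-end port; add TLS directives in production
    server_name ollama.internal.example;   # placeholder

    # Authentication proxy: require credentials for all Ollama traffic.
    auth_basic           "Ollama";
    auth_basic_user_file /etc/nginx/ollama.htpasswd;  # placeholder path

    # Workaround: block model pulls entirely if not required at runtime.
    location /api/pull {
        return 403;
    }

    location / {
        proxy_pass http://127.0.0.1:11434;
        proxy_set_header Host $host;
    }
}
```

Pair this with host-level firewall rules so that only the proxy can reach port 11434, and bind Ollama to loopback (for example via the OLLAMA_HOST environment variable) so it is never directly reachable.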

CISA SSVC Assessment

Decision: Track*
Exploitation: PoC
Automatable: Yes
Technical Impact: Partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act: Article 9, Risk management system
ISO 42001: 8.4, AI system operation
NIST AI RMF: MANAGE 2.2, Mechanisms to sustain effectiveness of risk responses
OWASP LLM Top 10: LLM04, Model Denial of Service

Frequently Asked Questions

What is CVE-2025-1975?

Ollama 0.5.11 crashes when processing a crafted model manifest through the /api/pull endpoint due to missing array index validation. Any user with network access to your Ollama instance can take down your LLM inference service. Update immediately and restrict /api/pull to trusted networks or authenticated users.

Is CVE-2025-1975 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2025-1975, increasing the risk of exploitation.

How to fix CVE-2025-1975?

1. PATCH: Upgrade Ollama beyond 0.5.11 immediately. Check https://github.com/ollama/ollama/releases for the fixed version.
2. NETWORK ISOLATION: Restrict the Ollama port (default 11434) to localhost or trusted subnets only using firewall rules. Never expose Ollama directly to the internet.
3. AUTHENTICATION PROXY: Place a reverse proxy (nginx, Caddy) with authentication in front of Ollama if multi-user access is required.
4. DETECTION: Alert on repeated 5xx errors or unexpected Ollama process restarts. Monitor for anomalous POST /api/pull requests from unexpected sources.
5. WORKAROUND (if patching is not immediate): Disable or firewall the /api/pull endpoint if model pulling is not required at runtime.

What systems are affected by CVE-2025-1975?

This vulnerability affects the following AI/ML architecture patterns: model serving, LLM inference, RAG pipelines, agent frameworks, local AI deployments.

What is the CVSS score for CVE-2025-1975?

No CVSS score has been assigned yet.

Technical Details

NVD Description

A vulnerability in the Ollama server version 0.5.11 allows a malicious user to cause a Denial of Service (DoS) attack by customizing the manifest content and spoofing a service. This is due to improper validation of array index access when downloading a model via the /api/pull endpoint, which can lead to a server crash.
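To illustrate the bug class only (Ollama is written in Go, and this is not its actual code), the sketch below shows how a manifest parser that indexes a layers array with an unvalidated index crashes on a crafted manifest. The function names are hypothetical:

```python
import json

def digest_for_layer(manifest_json: str, index: int) -> str:
    """Buggy version: trusts the supplied index without checking it against
    len(layers), mirroring the missing array-index validation in CVE-2025-1975."""
    manifest = json.loads(manifest_json)
    layers = manifest.get("layers", [])
    return layers[index]["digest"]  # raises IndexError on an out-of-range index

def digest_for_layer_safe(manifest_json: str, index: int) -> str:
    """Fixed version: validate the index before use and fail gracefully."""
    manifest = json.loads(manifest_json)
    layers = manifest.get("layers", [])
    if not 0 <= index < len(layers):
        raise ValueError(f"layer index {index} out of range for {len(layers)} layers")
    return layers[index]["digest"]

# A crafted manifest advertising fewer layers than the server-side code assumes:
crafted = json.dumps({"layers": [{"digest": "sha256:aaaa"}]})
```

In a server written without a recovery handler around the request path, the unhandled out-of-range access takes down the whole process, which is the DoS condition described above.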

Exploitation Scenario

An attacker with network access to an Ollama instance — an insider, compromised developer machine, or lateral movement from another host — sends a POST request to /api/pull with a crafted manifest payload that includes malformed array indices. The Ollama server attempts to access an out-of-bounds array index during manifest parsing, triggering a panic/crash. The attacker can repeat this after each restart to maintain a persistent DoS condition, effectively taking down any AI application stack dependent on that Ollama instance (chatbots, RAG pipelines, agentic workflows).
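The detection guidance above can be automated by scanning proxy or access logs for /api/pull requests from untrusted sources. A minimal sketch, assuming common/combined-format log lines; the allowlist values are illustrative:

```python
import re

# Trusted source addresses; adjust for your environment (illustrative values).
ALLOWED_SOURCES = {"127.0.0.1", "10.0.0.5"}

# Extracts the client IP, HTTP method, and path from a common/combined-format line.
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\w+) (\S+)[^"]*"')

def suspicious_pull_requests(log_lines):
    """Return (ip, path) pairs for POST /api/pull requests from untrusted IPs."""
    hits = []
    for line in log_lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        ip, method, path = m.groups()
        if method == "POST" and path.startswith("/api/pull") and ip not in ALLOWED_SOURCES:
            hits.append((ip, path))
    return hits
```

Feeding this into an alerting pipeline alongside process-restart monitoring covers both signals the detection step calls out: anomalous pull requests and the crash loop an attacker needs to maintain a persistent DoS.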

Weaknesses (CWE)

CWE-129: Improper Validation of Array Index (inferred from the improper array-index validation described in the NVD entry).

Timeline

Published
May 16, 2025
Last Modified
June 24, 2025
First Seen
May 16, 2025

Related Vulnerabilities