CVE-2025-1975: Ollama: DoS via malicious manifest in /api/pull
Ollama 0.5.11 crashes when processing a crafted model manifest through the /api/pull endpoint due to missing array index validation. Any user with network access to your Ollama instance can take down your LLM inference service. A public proof-of-concept exists, and CISA's SSVC decision is Track. Update immediately and restrict /api/pull to trusted networks or authenticated users.
Risk Assessment
Risk is HIGH for organizations running Ollama in shared or network-accessible environments. Ollama ships with no authentication by default, meaning any network-reachable instance is trivially exploitable. The crash is deterministic — a single malformed request suffices. In DevOps and MLOps pipelines where Ollama runs as a shared inference backend, this translates directly to service disruption across dependent AI workloads. No evidence of active exploitation yet, but the exploit surface is large given Ollama's adoption in enterprise AI labs.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| ollama | pip | — | No patch |
If you run Ollama 0.5.11 (the only version confirmed vulnerable), assume you are affected until you have upgraded.
Recommended Action
5 steps:

1. PATCH: Upgrade Ollama beyond 0.5.11 immediately. Check https://github.com/ollama/ollama/releases for the fixed version.
2. NETWORK ISOLATION: Restrict the Ollama port (default 11434) to localhost or trusted subnets only using firewall rules. Never expose Ollama directly to the internet.
3. AUTHENTICATION PROXY: Place a reverse proxy (nginx, Caddy) with authentication in front of Ollama if multi-user access is required.
4. DETECTION: Alert on repeated 5xx errors or unexpected Ollama process restarts. Monitor for anomalous POST /api/pull requests from unexpected sources.
5. WORKAROUND (if patching is not immediate): Disable or firewall the /api/pull endpoint if model pulling is not required at runtime.
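Steps 3 and 5 can be sketched with a minimal nginx configuration. This is an illustrative fragment, not an official Ollama deployment recipe; the listen port, upstream address, and htpasswd path are assumptions for illustration.

```nginx
# Minimal sketch: nginx with HTTP basic auth in front of a local Ollama.
# Upstream address, port, and credential file path are assumptions.
server {
    listen 8080;

    location / {
        auth_basic           "Ollama";
        auth_basic_user_file /etc/nginx/.htpasswd;  # create with htpasswd
        proxy_pass           http://127.0.0.1:11434;
        proxy_set_header     Host $host;
    }

    # Workaround variant (step 5): the longest-prefix match wins, so this
    # blocks /api/pull entirely if model pulling is not needed at runtime.
    location /api/pull {
        return 403;
    }
}
```

Pair this with firewall rules so that port 11434 itself is reachable only from localhost, forcing all clients through the authenticated proxy.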
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-1975?
Ollama 0.5.11 crashes when processing a crafted model manifest through the /api/pull endpoint due to missing array index validation. Any user with network access to your Ollama instance can take down your LLM inference service. Update immediately and restrict /api/pull to trusted networks or authenticated users.
Is CVE-2025-1975 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2025-1975, increasing the risk of exploitation.
How to fix CVE-2025-1975?
1. PATCH: Upgrade Ollama beyond 0.5.11 immediately. Check https://github.com/ollama/ollama/releases for the fixed version.
2. NETWORK ISOLATION: Restrict the Ollama port (default 11434) to localhost or trusted subnets only using firewall rules. Never expose Ollama directly to the internet.
3. AUTHENTICATION PROXY: Place a reverse proxy (nginx, Caddy) with authentication in front of Ollama if multi-user access is required.
4. DETECTION: Alert on repeated 5xx errors or unexpected Ollama process restarts. Monitor for anomalous POST /api/pull requests from unexpected sources.
5. WORKAROUND (if patching is not immediate): Disable or firewall the /api/pull endpoint if model pulling is not required at runtime.
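The detection step can be sketched as a small log-scanning helper. This is a hedged illustration, not part of Ollama's tooling: it assumes a reverse proxy writing an nginx "combined"-style access log, and the allowlist of trusted source IPs is hypothetical.

```python
# Sketch for detection: flag POST /api/pull requests from outside an
# allowlist in a combined-format access log. Log format and allowlist
# are assumptions for illustration.
import re

ALLOWED_SOURCES = {"127.0.0.1", "10.0.0.5"}  # hypothetical trusted hosts

# client_ip ... [timestamp] "METHOD path ..." status
LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+)[^"]*" (\d{3})')

def suspicious_pull_requests(log_lines):
    """Yield (ip, status) for POST /api/pull calls from untrusted sources."""
    for line in log_lines:
        m = LINE_RE.match(line)
        if not m:
            continue
        ip, method, path, status = m.groups()
        if method == "POST" and path.startswith("/api/pull") \
                and ip not in ALLOWED_SOURCES:
            yield ip, int(status)
```

Feeding the yielded pairs into an alerting pipeline covers both signals named in step 4: unexpected sources, and repeated 5xx statuses that suggest crash attempts.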
What systems are affected by CVE-2025-1975?
Ollama server version 0.5.11 is affected. In practice, that reaches the AI/ML architecture patterns commonly built on Ollama: model serving, LLM inference, RAG pipelines, agent frameworks, and local AI deployments.
What is the CVSS score for CVE-2025-1975?
No CVSS score has been assigned yet.
Technical Details
NVD Description
A vulnerability in the Ollama server version 0.5.11 allows a malicious user to cause a Denial of Service (DoS) attack by customizing the manifest content and spoofing a service. This is due to improper validation of array index access when downloading a model via the /api/pull endpoint, which can lead to a server crash.
Exploitation Scenario
An attacker with network access to an Ollama instance — an insider, compromised developer machine, or lateral movement from another host — sends a POST request to /api/pull with a crafted manifest payload that includes malformed array indices. The Ollama server attempts to access an out-of-bounds array index during manifest parsing, triggering a panic/crash. The attacker can repeat this after each restart to maintain a persistent DoS condition, effectively taking down any AI application stack dependent on that Ollama instance (chatbots, RAG pipelines, agentic workflows).
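The underlying bug class is easy to illustrate. The sketch below is not Ollama's actual code (the server is written in Go); it is a hypothetical, language-agnostic illustration of indexing with an attacker-influenced value, and of the missing bounds check that constitutes the fix.

```python
# Illustrative sketch of the bug class (not Ollama's real code): a manifest
# parser indexes into a layers list with an attacker-influenced value.

def vulnerable_layer_lookup(manifest: dict, idx: int) -> str:
    # No bounds check: a crafted manifest can drive idx out of range,
    # raising an unhandled error that crashes a naive server loop.
    return manifest["layers"][idx]

def patched_layer_lookup(manifest: dict, idx: int) -> str:
    # The fix: validate the index before use and fail gracefully with an
    # error the request handler can turn into a 4xx response.
    layers = manifest.get("layers", [])
    if not 0 <= idx < len(layers):
        raise ValueError(f"layer index {idx} out of range")
    return layers[idx]
```

The patched variant converts a process-killing fault into a recoverable, per-request error, which is why simple bounds validation is sufficient to close this class of DoS.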
Weaknesses (CWE)
References
- huntr.com/bounties/921ba5d4-f1d0-4c66-9764-4f72dffe7acd (exploit, third party)
- github.com/ARPSyndicate/cve-scores (exploit)
Related Vulnerabilities
- CVE-2025-63389 (9.8): ollama: Missing Auth allows unauthenticated access (same package: ollama)
- CVE-2026-7482 (9.1): Ollama: heap OOB read leaks API keys and chat data (same package: ollama)
- CVE-2026-44007 (9.1): vm2: sandbox escape via nesting:true enables RCE (same package: ollama)
- CVE-2024-37032 (8.8): Ollama: path traversal enables RCE via model blob API (same package: ollama)
- CVE-2024-39720 (8.2): Ollama: OOB read in GGUF parser enables remote DoS (same package: ollama)