CVE-2025-66959: ollama: Input Validation flaw enables exploitation
Severity: HIGH · PoC available · CISA SSVC: Track*

CVE-2025-66959 is a network-exploitable denial-of-service flaw in Ollama's GGUF decoder that requires no authentication; any Ollama instance exposed to untrusted networks can be crashed remotely. No fixed release was available at the time of writing, so upgrade beyond version 0.12.10 as soon as a patch ships, and restrict Ollama's API port (default 11434) to localhost or trusted network segments only. If you cannot patch today, a firewall rule blocking external access to port 11434 is an effective temporary control.
Risk Assessment
HIGH. The CVSS 7.5 score understates operational risk for AI teams: GGUF is the dominant model format for self-hosted LLM inference, and Ollama is the de facto local LLM runtime for thousands of enterprise deployments. AC:Low + PR:None + UI:None means exploitation is trivial and automatable. The attack surface is wide — Ollama instances are routinely misconfigured to bind on 0.0.0.0 rather than localhost, making them reachable without any credential. The DoS impact translates directly to loss of AI-assisted workflows, internal copilot tools, and any production service proxying through Ollama.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| ollama | pip | — | No patch |
If you run Ollama v0.12.10, or cannot confirm your version, assume you are affected.
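As a quick triage aid, the version check can be scripted. This is an illustrative sketch, not an official tool; it assumes a dotted version string such as the one printed by `ollama --version`, and since no fixed release existed at the time of writing, treat any current version as affected regardless of what the comparison says:

```python
def is_vulnerable(version: str, last_known_bad: str = "0.12.10") -> bool:
    """Return True if `version` is <= 0.12.10, the release named in the CVE.

    Simplistic numeric comparison of dotted versions; does not handle
    pre-release suffixes. Treat the result as triage, not proof.
    """
    def parse(v: str) -> tuple[int, ...]:
        return tuple(int(p) for p in v.strip().lstrip("v").split("."))
    return parse(version) <= parse(last_known_bad)

print(is_vulnerable("0.12.10"))  # True
print(is_vulnerable("0.13.0"))   # False (a hypothetical future fixed release)
```

Run it against the version reported by each host found in your inventory step.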
Recommended Action
Five steps:

1. PATCH: Upgrade Ollama beyond version 0.12.10 once a fixed release is available; monitor https://github.com/ollama/ollama/releases.
2. NETWORK ISOLATION (immediate workaround): Ensure Ollama binds to 127.0.0.1 only (the default is localhost; verify with `ss -tlnp | grep 11434`). Block port 11434 at the host firewall and at any network perimeter for all non-whitelisted sources.
3. REVERSE PROXY WITH AUTH: If Ollama must be network-accessible, front it with nginx or Caddy requiring authentication; Ollama itself has no native auth.
4. DETECTION: Monitor for Ollama process crashes/restarts, unusual HTTP 5xx spikes on port 11434, and oversized or malformed POST payloads to /api endpoints. Alert on process exits from the Ollama service unit.
5. INVENTORY: Identify all Ollama instances in your environment; developer workstations on open Wi-Fi are a commonly overlooked exposure.
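The `ss` check in the network-isolation step can be automated across hosts during inventory. A minimal sketch, assuming `ss -tln`-style output; the function name and loopback list here are illustrative, not part of any official tooling:

```python
import re

# Addresses that are only reachable from the local machine.
LOOPBACK = ("127.0.0.1", "[::1]", "::1")

def exposed_listeners(ss_output: str, port: int = 11434) -> list[str]:
    """From `ss -tln`-style lines, return local addresses listening on
    `port` that are NOT bound to loopback, i.e. reachable from the network."""
    hits = []
    for line in ss_output.splitlines():
        for token in line.split():
            # Match addr:port tokens such as 0.0.0.0:11434 or [::]:11434.
            m = re.fullmatch(r"(.+):(\d+)", token)
            if m and int(m.group(2)) == port:
                if m.group(1) not in LOOPBACK:
                    hits.append(m.group(1))
                break  # first addr:port token on the line is the local address
    return hits

sample = "LISTEN 0 4096 0.0.0.0:11434 0.0.0.0:*"
print(exposed_listeners(sample))  # ['0.0.0.0']
```

Feed it the stdout of `ss -tln` on each host; any non-empty result means the listener is reachable beyond loopback and should be firewalled or rebound to 127.0.0.1.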
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-66959?
CVE-2025-66959 is a network-exploitable denial-of-service flaw in Ollama's GGUF decoder that requires no authentication; any Ollama instance exposed to untrusted networks can be crashed remotely. Upgrade beyond version 0.12.10 as soon as a fixed release is available, and restrict Ollama's API port (default 11434) to localhost or trusted network segments only. If you cannot patch today, a firewall rule blocking external access to port 11434 is an effective temporary control.
Is CVE-2025-66959 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2025-66959, increasing the risk of exploitation.
How to fix CVE-2025-66959?
1. PATCH: Upgrade Ollama beyond version 0.12.10 once a fixed release is available; monitor https://github.com/ollama/ollama/releases.
2. NETWORK ISOLATION (immediate workaround): Ensure Ollama binds to 127.0.0.1 only (the default is localhost; verify with `ss -tlnp | grep 11434`). Block port 11434 at the host firewall and at any network perimeter for all non-whitelisted sources.
3. REVERSE PROXY WITH AUTH: If Ollama must be network-accessible, front it with nginx or Caddy requiring authentication; Ollama itself has no native auth.
4. DETECTION: Monitor for Ollama process crashes/restarts, unusual HTTP 5xx spikes on port 11434, and oversized or malformed POST payloads to /api endpoints. Alert on process exits from the Ollama service unit.
5. INVENTORY: Identify all Ollama instances in your environment; developer workstations on open Wi-Fi are a commonly overlooked exposure.
What systems are affected by CVE-2025-66959?
This vulnerability affects the following AI/ML architecture patterns: model serving, local LLM inference, RAG pipelines, AI development environments, internal AI copilot infrastructure.
What is the CVSS score for CVE-2025-66959?
CVE-2025-66959 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.29%.
Technical Details
NVD Description
An issue in ollama v.0.12.10 allows a remote attacker to cause a denial of service via the GGUF decoder
Exploitation Scenario
An adversary scans corporate IP ranges or cloud VPC subnets for open port 11434 (Ollama default). Upon finding a responsive instance, they craft a GGUF file with a maliciously oversized or invalid length field in the decoder metadata — as documented in the PoC blog referenced in the CVE. They POST this payload to Ollama's model load or generate endpoint. The GGUF decoder attempts to copy a buffer of the attacker-controlled length without bounds checking, triggering a panic that crashes the Ollama process. If Ollama lacks a process supervisor (systemd with Restart=always), the service stays down. In environments where AI copilots, RAG systems, or model-serving APIs depend on this Ollama instance, the downstream services become unavailable — causing a cascading outage without requiring any credentials or prior access.
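The bug class described above, copying as many bytes as an attacker-controlled length field declares, can be illustrated with a simplified decoder sketch. This is a hypothetical Python model of the pattern, not Ollama's actual Go code; the only format detail assumed is that GGUF strings carry a little-endian uint64 length prefix:

```python
import struct

def read_string_unchecked(buf: bytes, off: int) -> str:
    # Vulnerable pattern: trust the 64-bit length prefix outright.
    # In Go, copying into a buffer sized from this value can panic or
    # exhaust memory; Python slicing merely truncates, so this sketch
    # models the missing check rather than the crash itself.
    (n,) = struct.unpack_from("<Q", buf, off)
    return buf[off + 8 : off + 8 + n].decode("utf-8")

def read_string_checked(buf: bytes, off: int) -> str:
    # Hardened pattern: validate the declared length against the bytes
    # actually remaining before doing any copy or allocation.
    (n,) = struct.unpack_from("<Q", buf, off)
    remaining = len(buf) - off - 8
    if n > remaining:
        raise ValueError(f"declared length {n} exceeds {remaining} remaining bytes")
    return buf[off + 8 : off + 8 + n].decode("utf-8")

good = struct.pack("<Q", 4) + b"gguf"
evil = struct.pack("<Q", 1 << 40) + b"gguf"  # header lies about its length
print(read_string_checked(good, 0))  # gguf
try:
    read_string_checked(evil, 0)
except ValueError as e:
    print("rejected:", e)
```

The fix for this class of bug is exactly the remaining-bytes guard: reject the malformed file before the copy, instead of crashing during it.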
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
References
- github.com/ollama/ollama/issues/9820 (vendor issue; exploit details)
- zero.shotlearni.ng/blog/cve-2025-66959panic-dos-via-unchecked-length-in-gguf-decoder-copy/ (third-party exploit write-up)
Related Vulnerabilities
- CVE-2025-63389 (9.8): ollama: Missing Auth allows unauthenticated access (same package)
- CVE-2026-44007 (9.1): vm2: sandbox escape via nesting:true enables RCE
- CVE-2024-37032 (8.8): Ollama: path traversal enables RCE via model blob API (same package)
- CVE-2024-39720 (8.2): Ollama: OOB read in GGUF parser enables remote DoS (same package)
- CVE-2024-39719 (7.5): Ollama: file existence oracle via api/create errors (same package)