CVE-2025-66959: ollama: Input validation flaw enables remote denial of service

HIGH · PoC available · CISA SSVC: Track*
Published January 21, 2026
CISO Take

CVE-2025-66959 is a network-exploitable DoS in Ollama's GGUF decoder requiring zero authentication — any Ollama instance exposed to untrusted networks is at immediate risk of being crashed. Patch to a version past 0.12.10 immediately and restrict Ollama's API port (default 11434) to localhost or trusted network segments only. If you cannot patch today, a firewall rule blocking external access to port 11434 is an effective temporary control.
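
The temporary firewall control can be sketched as host-level rules. This assumes a Linux host with ufw or iptables; the rules are illustrative, not a drop-in hardening baseline:

```shell
# Illustrative host-firewall rules blocking remote access to Ollama's
# default API port (11434). Pick the variant matching your tooling.

# ufw:
ufw deny in to any port 11434 proto tcp

# iptables: allow loopback traffic, then drop everything else to 11434
iptables -A INPUT -i lo -p tcp --dport 11434 -j ACCEPT
iptables -A INPUT -p tcp --dport 11434 -j DROP
```

Note that raw iptables rules do not survive a reboot on their own; persist them with iptables-persistent or your distribution's equivalent (ufw persists its rules automatically).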

Risk Assessment

HIGH. The CVSS 7.5 score understates operational risk for AI teams: GGUF is the dominant model format for self-hosted LLM inference, and Ollama is the de facto local LLM runtime for thousands of enterprise deployments. AC:Low + PR:None + UI:None means exploitation is trivial and automatable. The attack surface is wide — Ollama instances are routinely misconfigured to bind on 0.0.0.0 rather than localhost, making them reachable without any credential. The DoS impact translates directly to loss of AI-assisted workflows, internal copilot tools, and any production service proxying through Ollama.

Affected Systems

Package   Ecosystem   Vulnerable Range   Patched
ollama    pip         (not specified)    No patch

Do you use ollama? You're affected.

Severity & Risk

CVSS 3.1: 7.5 / 10
EPSS: 0.3% chance of exploitation within 30 days (higher than 53% of all CVEs)
Exploitation status: Exploit available
Exploitation: Medium
Sophistication: Trivial
Exploitation confidence: Medium
CISA SSVC: Public PoC (indexed in trickest/cve)

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

AV: Network · AC: Low · PR: None · UI: None
S: Unchanged · C: None · I: None · A: High

Recommended Action

5 steps
  1. PATCH

    Upgrade Ollama to a release later than 0.12.10; if no fixed release is available yet, monitor https://github.com/ollama/ollama/releases.

  2. NETWORK ISOLATION (IMMEDIATE WORKAROUND)

    Ensure Ollama binds to 127.0.0.1 only (the default is localhost; verify with `ss -tlnp | grep 11434`). Block port 11434 at the host firewall and at any network perimeter for all non-whitelisted sources.

  3. REVERSE PROXY WITH AUTH

    If Ollama must be network-accessible, front it with nginx/Caddy requiring authentication — Ollama itself has no native auth.

  4. DETECTION

    Monitor for Ollama process crashes/restarts, unusual HTTP 5xx spikes on port 11434, and oversized or malformed POST payloads to /api endpoints. Alert on process exits from the Ollama service unit.

  5. INVENTORY

    Identify all Ollama instances in your environment; dev workstations on open Wi-Fi networks are a commonly overlooked exposure.
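
Step 3's reverse proxy can be sketched as a minimal nginx server block. Hostnames, certificate paths, and the htpasswd file below are placeholders; create credentials with, e.g., `htpasswd -c /etc/nginx/ollama.htpasswd ai-user`:

```nginx
server {
    listen 443 ssl;
    server_name ollama.internal.example.com;   # placeholder hostname

    ssl_certificate     /etc/nginx/certs/ollama.crt;
    ssl_certificate_key /etc/nginx/certs/ollama.key;

    location / {
        auth_basic           "Ollama API";
        auth_basic_user_file /etc/nginx/ollama.htpasswd;
        proxy_pass           http://127.0.0.1:11434;
        proxy_read_timeout   300s;   # model responses can be slow
    }
}
```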
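
Step 1 can be paired with a quick version check. A sketch, assuming `ollama` is on PATH and that releases after 0.12.10 contain the fix, per the advisory above:

```shell
# Compare the installed Ollama version against the vulnerable 0.12.10
# using sort -V (natural version ordering).
if ! command -v ollama >/dev/null 2>&1; then
  echo "ollama not installed on this host"
else
  installed=$(ollama --version | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -n1)
  oldest=$(printf '%s\n' "$installed" "0.12.10" | sort -V | head -n1)
  if [ "$oldest" = "$installed" ]; then
    echo "VULNERABLE: $installed is 0.12.10 or older -- upgrade"
  else
    echo "OK: $installed is newer than 0.12.10"
  fi
fi
```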
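
Step 2's binding check can be sketched as follows (Linux with iproute2's `ss`; `OLLAMA_HOST` is Ollama's documented listen-address variable):

```shell
# A 127.0.0.1:11434 listener is local-only; 0.0.0.0 or [::] means the
# API is reachable from the network.
ss -tlnp 2>/dev/null | grep 11434 || echo "no listener found on 11434"

# Force loopback-only binding for the service environment, then restart:
export OLLAMA_HOST=127.0.0.1:11434
```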
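
Step 4's log monitoring can be sketched as a crash-signature filter. The sample lines are illustrative stand-ins, not verbatim Ollama output; in production you would feed it `journalctl -u ollama --since -1h`:

```shell
# Flag Go panics and service exits that suggest a crashed Ollama process.
logs='Jan 21 10:02:11 host ollama[812]: panic: runtime error: makeslice: len out of range
Jan 21 10:02:11 host systemd[1]: ollama.service: Main process exited, code=exited, status=2'

echo "$logs" | grep -E 'panic:|Main process exited' \
  | sed 's/^/ALERT: possible Ollama crash -> /'
```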
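
Step 5's sweep can be sketched with nmap. The 10.0.0.0/24 range is a placeholder; substitute your own subnets and obtain authorization before scanning:

```shell
# List hosts answering on Ollama's default port, via nmap's grepable output.
if command -v nmap >/dev/null 2>&1; then
  nmap -p 11434 --open -oG - 10.0.0.0/24 | awk '/11434\/open/ {print $2}'
else
  echo "nmap not available on this host"
fi
```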

CISA SSVC Assessment

Decision: Track*
Exploitation: PoC
Automatable: Yes
Technical Impact: Partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.2.6 - AI system availability and resilience
NIST AI RMF
MANAGE 2.2 - Mechanisms to sustain deployed AI system value and manage risks
MANAGE 2.4 - Residual risks are managed and monitored
OWASP LLM Top 10
LLM04 - Model Denial of Service

Frequently Asked Questions

What is CVE-2025-66959?

CVE-2025-66959 is a network-exploitable DoS in Ollama's GGUF decoder requiring zero authentication — any Ollama instance exposed to untrusted networks is at immediate risk of being crashed. Patch to a version past 0.12.10 immediately and restrict Ollama's API port (default 11434) to localhost or trusted network segments only. If you cannot patch today, a firewall rule blocking external access to port 11434 is an effective temporary control.

Is CVE-2025-66959 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2025-66959, increasing the risk of exploitation.

How to fix CVE-2025-66959?

1. PATCH: Upgrade Ollama beyond version 0.12.10. Monitor https://github.com/ollama/ollama/releases for a fixed release.
2. NETWORK ISOLATION (immediate workaround): Ensure Ollama binds to 127.0.0.1 only (default is localhost, verify with `ss -tlnp | grep 11434`). Block port 11434 at the host firewall and any network perimeter for all non-whitelisted sources.
3. REVERSE PROXY WITH AUTH: If Ollama must be network-accessible, front it with nginx/Caddy requiring authentication — Ollama itself has no native auth.
4. DETECTION: Monitor for Ollama process crashes/restarts, unusual HTTP 5xx spikes on port 11434, and oversized or malformed POST payloads to /api endpoints. Alert on process exits from the Ollama service unit.
5. INVENTORY: Identify all Ollama instances in your environment — dev workstations with open Wi-Fi connections are a common overlooked exposure.

What systems are affected by CVE-2025-66959?

This vulnerability affects the following AI/ML architecture patterns: model serving, local LLM inference, RAG pipelines, AI development environments, internal AI copilot infrastructure.

What is the CVSS score for CVE-2025-66959?

CVE-2025-66959 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.29%.

Technical Details

NVD Description

An issue in ollama v.0.12.10 allows a remote attacker to cause a denial of service via the GGUF decoder

Exploitation Scenario

An adversary scans corporate IP ranges or cloud VPC subnets for open port 11434 (Ollama default). Upon finding a responsive instance, they craft a GGUF file with a maliciously oversized or invalid length field in the decoder metadata — as documented in the PoC blog referenced in the CVE. They POST this payload to Ollama's model load or generate endpoint. The GGUF decoder attempts to copy a buffer of the attacker-controlled length without bounds checking, triggering a panic that crashes the Ollama process. If Ollama lacks a process supervisor (systemd with Restart=always), the service stays down. In environments where AI copilots, RAG systems, or model-serving APIs depend on this Ollama instance, the downstream services become unavailable — causing a cascading outage without requiring any credentials or prior access.
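
The supervision gap described above can be closed with a systemd drop-in. A sketch, assuming the unit is named ollama.service (as created by the official Linux installer):

```ini
# /etc/systemd/system/ollama.service.d/restart.conf
# Auto-restart after a crash so a single malicious payload cannot keep
# the service down. Apply with:
#   systemctl daemon-reload && systemctl restart ollama
[Service]
Restart=always
RestartSec=3
```

Automatic restart limits the blast radius of a single payload but does not stop a sustained crash loop; pair it with the network controls in the recommended actions.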

Weaknesses (CWE)

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
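
The 7.5 score can be reproduced from this vector with the CVSS v3.1 base formula, using the metric weights from the specification (AV:N = 0.85, AC:L = 0.77, PR:N = 0.85, UI:N = 0.85, C and I None = 0, A:H = 0.56):

```shell
awk 'BEGIN {
  iss  = 1 - (1 - 0) * (1 - 0) * (1 - 0.56)   # impact sub-score from C, I, A
  imp  = 6.42 * iss                           # impact, scope unchanged
  expl = 8.22 * 0.85 * 0.77 * 0.85 * 0.85     # exploitability
  b    = imp + expl; if (b > 10) b = 10
  b10  = b * 10                               # CVSS rounds UP to one decimal
  printf "%.1f\n", (b10 == int(b10)) ? b10 / 10 : (int(b10) + 1) / 10
}'
```

This prints 7.5 (impact 3.60 + exploitability 3.89, rounded up), matching the published base score.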

Timeline

Published
January 21, 2026
Last Modified
February 2, 2026
First Seen
January 21, 2026

Related Vulnerabilities