CVE-2025-66960: ollama: Unvalidated GGUF string length enables remote denial of service

Severity: HIGH · PoC available · CISA SSVC: Track*
Published January 21, 2026
CISO Take

Ollama's GGUF v1 parser reads attacker-controlled string lengths without validation, letting any network-reachable adversary crash your inference service by serving a malicious model file — no credentials or prior access needed. If your team runs Ollama in production, CI/CD pipelines, or dev environments that pull external models, patch immediately and restrict model sources to verified registries. Treat all externally-sourced GGUF files as untrusted until upgraded.

Risk Assessment

High risk for organizations with Ollama in their AI stack. CVSS 7.5 with no authentication, no user interaction, and low complexity means exploitation requires minimal skill. Impact is availability-only (no data exposure), but inference service crashes disrupt all downstream AI-dependent applications. Highest exposure for teams with Ollama API (port 11434) network-accessible or automated model pull pipelines.

Affected Systems

Package: ollama
Ecosystem: pip
Vulnerable Range: unspecified
Patched: No patch available

Do you use ollama? You're affected.

Severity & Risk

CVSS 3.1
7.5 / 10
EPSS
0.3%
chance of exploitation in 30 days
Higher than 53% of all CVEs
Exploitation Status
Exploit Available
Exploitation: MEDIUM
Sophistication
Trivial
Exploitation Confidence
medium
CISA SSVC: Public PoC
Public PoC indexed (trickest/cve)
Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

Attack Vector (AV): Network
Attack Complexity (AC): Low
Privileges Required (PR): None
User Interaction (UI): None
Scope (S): Unchanged
Confidentiality (C): None
Integrity (I): None
Availability (A): High

Recommended Action

  1. Patch: Upgrade Ollama to the fixed version — check GitHub releases for CVE-2025-66960 resolution.

  2. Network isolation: Restrict Ollama API access (default port 11434) to trusted internal IPs via firewall rules; never expose publicly.

  3. Model provenance: Only pull models from verified, hash-validated sources — block untrusted community GGUF files via allowlist policy.

  4. Process supervision: Run Ollama under systemd or supervisord with auto-restart to limit DoS downtime impact.

  5. Detection: Alert on unexpected Ollama process exits or OOM kills in system and application logs.

  6. Inventory: Audit all Ollama deployments across dev, staging, and prod — shadow IT instances are the highest risk.

CISA SSVC Assessment

Decision Track*
Exploitation poc
Automatable Yes
Technical Impact partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
  Art. 15 - Accuracy, robustness and cybersecurity
ISO 42001
  8.4 - AI system operation and monitoring
  A.9.7 - Information security for AI systems
NIST AI RMF
  MANAGE 2.2 - Mechanisms to address and manage AI risks
  MANAGE 2.2 - Residual risks and AI system impacts are monitored on an ongoing basis
OWASP LLM Top 10
  LLM03:2025 - Supply Chain Vulnerabilities
  LLM04 - Model Denial of Service
  LLM10:2025 - Unbounded Consumption

Frequently Asked Questions

What is CVE-2025-66960?

Ollama's GGUF v1 parser reads attacker-controlled string lengths without validation, letting any network-reachable adversary crash your inference service by serving a malicious model file — no credentials or prior access needed. If your team runs Ollama in production, CI/CD pipelines, or dev environments that pull external models, patch immediately and restrict model sources to verified registries. Treat all externally-sourced GGUF files as untrusted until upgraded.

Is CVE-2025-66960 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2025-66960, increasing the risk of exploitation.

How to fix CVE-2025-66960?

1. Patch: Upgrade Ollama to the fixed version; check GitHub releases for CVE-2025-66960 resolution.
2. Network isolation: Restrict Ollama API access (default port 11434) to trusted internal IPs via firewall rules; never expose publicly.
3. Model provenance: Only pull models from verified, hash-validated sources; block untrusted community GGUF files via allowlist policy.
4. Process supervision: Run Ollama under systemd or supervisord with auto-restart to limit DoS downtime impact.
5. Detection: Alert on unexpected Ollama process exits or OOM kills in system and application logs.
6. Inventory: Audit all Ollama deployments across dev, staging, and prod; shadow IT instances are the highest risk.

What systems are affected by CVE-2025-66960?

This vulnerability affects the following AI/ML architecture patterns: model serving, local LLM inference, self-hosted LLM deployments, MLOps pipelines, AI development environments.

What is the CVSS score for CVE-2025-66960?

CVE-2025-66960 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.29%.

Technical Details

NVD Description

An issue in ollama v0.12.10 allows a remote attacker to cause a denial of service via fs/ggml/gguf.go, where the function readGGUFV1String reads a string length from untrusted GGUF metadata without validation.

Exploitation Scenario

An adversary crafts a GGUF v1 model file with a maliciously oversized string length value in the metadata header (e.g., 0xFFFFFFFF bytes). They publish it to a public model hub or host it on an attacker-controlled server. When an engineer runs 'ollama pull attacker/malicious-model' or an automated MLOps pipeline fetches and evaluates new models from external sources, Ollama's readGGUFV1String in fs/ggml/gguf.go reads the attacker-controlled length and attempts to allocate or read that many bytes, triggering a Go panic. The Ollama service crashes immediately, dropping all active inference sessions. In environments with fully automated model evaluation pipelines, this attack can be triggered repeatedly without any human interaction.

Weaknesses (CWE)

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
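For tooling that ingests advisories like this one, the vector string decomposes into metric:value pairs on "/". A minimal Go parse, offered only as an illustration (not part of any CVSS library):

```go
package main

import (
	"fmt"
	"strings"
)

// parseCVSSVector splits a CVSS v3.x vector string into metric:value pairs,
// skipping the leading "CVSS:3.1" version segment. Illustrative helper.
func parseCVSSVector(v string) map[string]string {
	m := make(map[string]string)
	for _, part := range strings.Split(v, "/") {
		if k, val, ok := strings.Cut(part, ":"); ok && k != "CVSS" {
			m[k] = val
		}
	}
	return m
}

func main() {
	m := parseCVSSVector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H")
	// Confirms the advisory's profile: network-reachable, no privileges,
	// availability-only impact.
	fmt.Println(m["AV"], m["PR"], m["A"])
}
```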

Timeline

Published
January 21, 2026
Last Modified
February 2, 2026
First Seen
January 21, 2026

Related Vulnerabilities