CVE-2025-63389: ollama: Missing Auth allows unauthenticated access
CRITICAL (CISA SSVC: Track). Ollama instances exposed to any network interface — including internal networks and cloud VMs — are fully compromised with a single unauthenticated HTTP call. An attacker can pull, delete, or replace your production LLM models without leaving traditional credentials-based IOCs. Immediately firewall Ollama to localhost-only and verify no running instance is reachable beyond 127.0.0.1:11434; treat any internet-exposed Ollama as a confirmed incident.
Risk Assessment
Critically high. CVSS 9.8 reflects the zero-barrier exploitation: network-accessible, no credentials, no user interaction, no complexity. Ollama is widely deployed by enterprise teams for local and on-prem LLM inference, often with default settings that bind to 0.0.0.0. The attack surface is unusually broad — developers, MLOps teams, and production inference nodes are all potentially exposed. There is no authentication layer to exploit; this is a missing feature, making detection via auth logs impossible and exploitation indistinguishable from legitimate traffic.
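The zero-barrier exploitation described above also makes triage trivial for defenders: a single unauthenticated request tells you whether an instance is exposed. A minimal sketch, assuming only that `curl` is available (the host and port below are Ollama's defaults; point `HOST` at any suspect interface to audit it):

```shell
# Minimal exposure probe: an unauthenticated GET to /api/tags succeeds
# on any reachable Ollama instance and lists its installed models.
HOST="${HOST:-127.0.0.1}"
PORT="${PORT:-11434}"
if curl -s --max-time 3 "http://${HOST}:${PORT}/api/tags" | grep -q '"models"'; then
  status="EXPOSED"
else
  status="not reachable"
fi
echo "Ollama on ${HOST}:${PORT}: ${status}"
```

Any host other than 127.0.0.1 reporting EXPOSED should be treated per the incident guidance above.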
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| ollama | pip | — | No patch |
Do you use ollama? With no patched release available, every version should be treated as affected.
Recommended Action
1. PATCH: Upgrade Ollama to a version beyond v0.12.3 when available; monitor the official GitHub releases actively.
2. ISOLATE IMMEDIATELY: Ensure Ollama binds only to 127.0.0.1 — set OLLAMA_HOST=127.0.0.1 in the service environment before restarting.
3. FIREWALL: Block TCP/11434 at the host firewall level (iptables/nftables/security groups) — do not rely on application-level binding alone.
4. REVERSE PROXY WITH AUTH: Place an authenticated reverse proxy (nginx/Caddy with basic auth or mTLS) in front of any Ollama instance that must be network-accessible.
5. AUDIT: Run `ollama list` and cross-reference model hashes against known-good pull manifests; any unexpected model is a potential IOC.
6. DETECT: Alert on unexpected HTTP POST requests to /api/pull, /api/delete, or /api/copy from any source other than localhost in network logs or WAF rules.
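Steps 2 and 3 can be sketched together. This is a sketch under assumptions, not a definitive implementation: it presumes a systemd-managed Ollama service and an nftables ruleset with the common `inet filter input` chain, and it writes the drop-in to a local directory so it can be inspected before installing to the real path:

```shell
# Sketch of steps 2-3: pin Ollama to loopback via a systemd drop-in,
# then drop non-local traffic to 11434 at the host firewall.
# Written locally here for inspection; the real path is typically
# /etc/systemd/system/ollama.service.d/override.conf.
DROPIN_DIR="./ollama.service.d"
mkdir -p "$DROPIN_DIR"
cat > "$DROPIN_DIR/override.conf" <<'EOF'
[Service]
Environment="OLLAMA_HOST=127.0.0.1"
EOF
echo "Install the drop-in, then:"
echo "  systemctl daemon-reload && systemctl restart ollama"
# Firewall rule (nftables; adjust table/chain names to your ruleset):
echo "  nft add rule inet filter input tcp dport 11434 ip saddr != 127.0.0.1 drop"
```

Keeping both layers in place means a future config regression in the service binding is still caught by the firewall.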
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-63389?
CVE-2025-63389 is a missing-authentication vulnerability (CVSS 9.8) in the Ollama LLM platform, affecting versions up to and including v0.12.3. The API endpoints require no credentials, so any attacker who can reach the service — on internal networks and cloud VMs as well as the internet — can pull, delete, or replace production LLM models with a single unauthenticated HTTP call, leaving no traditional credentials-based IOCs. Firewall Ollama to localhost-only (127.0.0.1:11434) and treat any internet-exposed instance as a confirmed incident.
Is CVE-2025-63389 actively exploited?
No confirmed active exploitation of CVE-2025-63389 has been reported, but organizations should still patch proactively.
How to fix CVE-2025-63389?
1. PATCH: Upgrade Ollama to a version beyond v0.12.3 when available; monitor the official GitHub releases actively. 2. ISOLATE IMMEDIATELY: Ensure Ollama binds only to 127.0.0.1 — set OLLAMA_HOST=127.0.0.1 in the service environment before restarting. 3. FIREWALL: Block TCP/11434 at the host firewall level (iptables/nftables/security groups) — do not rely on application-level binding alone. 4. REVERSE PROXY WITH AUTH: Place an authenticated reverse proxy (nginx/Caddy with basic auth or mTLS) in front of any Ollama instance that must be network-accessible. 5. AUDIT: Run `ollama list` and cross-reference model hashes against known-good pull manifests; any unexpected model is a potential IOC. 6. DETECT: Alert on unexpected HTTP POST to /api/pull, /api/delete, or /api/copy from any source other than localhost in network logs or WAF rules.
What systems are affected by CVE-2025-63389?
This vulnerability affects the following AI/ML architecture patterns: model serving, agent frameworks, RAG pipelines, LLM API integrations, on-prem AI infrastructure.
What is the CVSS score for CVE-2025-63389?
CVE-2025-63389 has a CVSS v3.1 base score of 9.8 (CRITICAL). The EPSS exploitation probability is 0.19%.
Technical Details
NVD Description
A critical authentication bypass vulnerability exists in Ollama platform's API endpoints in versions prior to and including v0.12.3. The platform exposes multiple API endpoints without requiring authentication, enabling remote attackers to perform unauthorized model management operations.
Exploitation Scenario
An attacker performs an internet-wide scan for port 11434 or targets a known enterprise IP range. Upon finding an exposed Ollama instance, they issue a POST /api/pull with a payload pointing to an attacker-controlled registry hosting a backdoored GGUF model that mimics llama3 or mistral. The legitimate model is silently overwritten. Subsequent LLM queries from the enterprise's internal tools — RAG pipelines, code assistants, chatbots — now interact with the poisoned model, which is capable of prompt injection steering, data exfiltration via crafted responses, or jailbreak facilitation. Alternatively, the attacker deletes all loaded models via POST /api/delete, causing immediate denial of service to all AI-dependent workflows with no authentication trail in any SIEM.
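A defender hunting for the scenario above can apply the DETECT guidance against proxy or WAF access logs. The combined-log field layout and the sample entries below are assumptions for illustration, so adapt the awk field numbers to your own log format:

```shell
# Hedged detection sketch: flag POSTs to Ollama's model-management
# endpoints from any source other than localhost. A tiny sample log is
# created so the sketch is self-contained; point LOGFILE at real logs.
LOGFILE="access.log"
cat > "$LOGFILE" <<'EOF'
127.0.0.1 - - [10/Oct/2025:12:00:00 +0000] "POST /api/pull HTTP/1.1" 200 -
203.0.113.7 - - [10/Oct/2025:12:01:00 +0000] "POST /api/delete HTTP/1.1" 200 -
EOF
# In combined log format: $1 = client IP, $6 = "METHOD (leading quote
# included), $7 = request path.
awk '$1 != "127.0.0.1" && $6 == "\"POST" && $7 ~ /^\/api\/(pull|delete|copy)/ \
      { print "ALERT: unauthenticated model op from " $1 ": " $7 }' \
    "$LOGFILE" > alerts.txt
cat alerts.txt
```

Because exploitation is indistinguishable from legitimate client traffic, the source address is the only reliable discriminator; only the localhost entry is ignored here.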
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
Related Vulnerabilities
- CVE-2026-44007 (9.1) vm2: sandbox escape via nesting:true enables RCE
- CVE-2024-37032 (8.8) Ollama: path traversal enables RCE via model blob API (same package: ollama)
- CVE-2024-39720 (8.2) Ollama: OOB read in GGUF parser enables remote DoS (same package: ollama)
- CVE-2024-39719 (7.5) Ollama: file existence oracle via api/create errors (same package: ollama)
- CVE-2024-45436 (7.5) Ollama: ZIP path traversal exposes host filesystem (same package: ollama)