CVE-2025-15514: Ollama null pointer dereference in multi-modal image processing enables remote denial of service

Severity: HIGH | PoC Available | CISA SSVC: Track*
Published January 12, 2026
CISO Take

Any organization running Ollama with multi-modal (vision) models and network-accessible API endpoints is exposed to a trivially exploitable denial-of-service — no authentication, no special knowledge, single crafted HTTP request. Patch immediately or enforce network-level access controls restricting /api/chat to trusted sources. If running Ollama in production AI pipelines or agent frameworks, treat this as a service availability incident risk until mitigated.

Risk Assessment

HIGH risk for production Ollama deployments. CVSS 7.5 is accurate: network-reachable, zero authentication, low complexity, no user interaction required. The vulnerability requires only that the attacker can send an HTTP POST to /api/chat — trivially achievable against any internet-exposed instance. Ollama is among the most widely deployed local LLM runners, and many organizations expose it without authentication under the assumption it is internal. Compounding factor: vision-capable models (LLaVA, llama3.2-vision, bakllava) are increasingly used in production agentic workflows. Not in CISA KEV and no confirmed active exploitation, but the bar to exploit is extremely low and public PoC exists via the huntr advisory.

Affected Systems

Package: ollama
Ecosystem: pip
Vulnerable Range: 0.11.5-rc0 through 0.13.5
Patched: No patch available

Severity & Risk

CVSS 3.1: 7.5 / 10
EPSS: 0.1% chance of exploitation in 30 days (higher than 26% of all CVEs)
Exploitation Status: Exploit Available
Exploitation: MEDIUM
Sophistication: Trivial
Exploitation Confidence: Medium
CISA SSVC: Public PoC
Public PoC indexed (trickest/cve)

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Attack Surface

Attack Vector (AV): Network
Attack Complexity (AC): Low
Privileges Required (PR): None
User Interaction (UI): None
Scope (S): Unchanged
Confidentiality (C): None
Integrity (I): None
Availability (A): High

Recommended Action

7 steps
  1. PATCH

    Monitor ollama/ollama GitHub for a release > 0.13.5 with this fix; apply immediately when available.

  2. NETWORK ISOLATION (immediate)

    Bind Ollama to localhost (default: 127.0.0.1:11434). If the service is exposed on 0.0.0.0, restrict access via firewall rules or a reverse proxy with authentication.

  3. API GATEWAY

    Place a WAF or reverse proxy in front of /api/chat that validates Content-Type and enforces payload size limits on image data fields.

  4. INPUT VALIDATION WORKAROUND

    If modifying application code, validate that base64-decoded image data has a valid MIME magic byte signature (PNG: 0x89504E47, JPEG: 0xFFD8FF) before forwarding to Ollama.

  5. MONITORING

    Alert on Ollama runner process crashes or unexpected restarts (systemd unit restart events, Docker container restarts). Alert on /api/chat requests containing images/base64 data from unexpected source IPs.

  6. PROCESS SUPERVISION

    Ensure Ollama runs with automatic restart (systemd Restart=on-failure or Docker restart policy) to minimize downtime from exploitation attempts.

  7. AUDIT EXPOSURE

    Run 'curl http://your-ollama-host:11434/api/tags' from an external network — if it responds, your instance is publicly exposed.
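
The magic-byte check from step 4 and the exposure probe from step 7 can be sketched in Python. This is a minimal illustration, not production code: the function names are ours, and the signature list should be extended to whatever image formats your pipeline actually accepts.

```python
import base64
import json
import urllib.request

# Magic-byte signatures for image formats commonly accepted by vision
# models (step 4). Extend as needed for your deployment.
MAGIC_BYTES = {
    "png": b"\x89PNG",        # 0x89504E47
    "jpeg": b"\xff\xd8\xff",  # 0xFFD8FF
}


def looks_like_valid_image(b64_data: str) -> bool:
    """Return True only if the decoded payload starts with a known signature."""
    try:
        raw = base64.b64decode(b64_data, validate=True)
    except (ValueError, TypeError):
        return False
    return any(raw.startswith(sig) for sig in MAGIC_BYTES.values())


def is_exposed(host: str, port: int = 11434, timeout: float = 3.0) -> bool:
    """Step 7: probe /api/tags; a JSON response means the API is reachable."""
    url = f"http://{host}:{port}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            json.load(resp)  # valid JSON confirms an Ollama-style API
            return True
    except (OSError, ValueError):
        return False


if __name__ == "__main__":
    # Reject random bytes before they ever reach /api/chat.
    junk = base64.b64encode(b"\x00\x01\x02not-an-image").decode()
    print(looks_like_valid_image(junk))  # False
```

Run `is_exposed("your-ollama-host")` from a network segment that should not have access; a True result means the instance is reachable from there.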

CISA SSVC Assessment

Decision: Track*
Exploitation: PoC
Automatable: Yes
Technical Impact: Partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act: Article 15 - Accuracy, robustness and cybersecurity
ISO 42001: 8.4 - AI System Operation; A.6.2.6 - AI system robustness and availability
NIST AI RMF: MANAGE 2.2 - Mechanisms to sustain AI system function (Risk Treatment: AI System Availability)
OWASP LLM Top 10: LLM04 - Model Denial of Service

Frequently Asked Questions

What is CVE-2025-15514?

CVE-2025-15514 is a null pointer dereference in Ollama's multi-modal image processing (versions 0.11.5-rc0 through 0.13.5). Malformed base64 image data sent to the /api/chat endpoint crashes the model runner, causing a denial of service for all users until the service is restarted. Exploitation requires no authentication and no user interaction.

Is CVE-2025-15514 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2025-15514, increasing the risk of exploitation.

How to fix CVE-2025-15514?

1. PATCH: Monitor ollama/ollama GitHub for a release > 0.13.5 with this fix; apply immediately when available.
2. NETWORK ISOLATION (immediate): Bind Ollama to localhost (default: 127.0.0.1:11434) — if exposed on 0.0.0.0, restrict via firewall rules or reverse proxy with authentication.
3. API GATEWAY: Place a WAF or reverse proxy in front of /api/chat that validates Content-Type and enforces payload size limits on image data fields.
4. INPUT VALIDATION WORKAROUND: If modifying application code, validate that base64-decoded image data has a valid MIME magic byte signature (PNG: 0x89504E47, JPEG: 0xFFD8FF) before forwarding to Ollama.
5. MONITORING: Alert on Ollama runner process crashes or unexpected restarts (systemd unit restart events, Docker container restarts). Alert on /api/chat requests containing images/base64 data from unexpected source IPs.
6. PROCESS SUPERVISION: Ensure Ollama runs with automatic restart (systemd Restart=on-failure or Docker restart policy) to minimize downtime from exploitation attempts.
7. AUDIT EXPOSURE: Run 'curl http://your-ollama-host:11434/api/tags' from an external network — if it responds, your instance is publicly exposed.

What systems are affected by CVE-2025-15514?

This vulnerability affects the following AI/ML architecture patterns: model serving, agent frameworks, multi-modal inference pipelines, local AI development environments, RAG pipelines with image ingestion.

What is the CVSS score for CVE-2025-15514?

CVE-2025-15514 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.09%.

Technical Details

NVD Description

Ollama 0.11.5-rc0 through current version 0.13.5 contain a null pointer dereference vulnerability in the multi-modal model image processing functionality. When processing base64-encoded image data via the /api/chat endpoint, the application fails to validate that the decoded data represents valid media before passing it to the mtmd_helper_bitmap_init_from_buf function. This function can return NULL for malformed input, but the code does not check this return value before dereferencing the pointer in subsequent operations. A remote attacker can exploit this by sending specially crafted base64 image data that decodes to invalid media, causing a segmentation fault and crashing the runner process. This results in a denial of service condition where the model becomes unavailable to all users until the service is restarted.
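
The bug class described above follows a well-known pattern: an allocation or parsing helper that can return NULL, whose result is dereferenced unchecked. The fragment below is illustrative pseudocode only; it is not the actual Ollama/llama.cpp source, and the real mtmd_helper_bitmap_init_from_buf signature differs.

```
// Illustrative pseudocode only, not the actual project source.

// VULNERABLE: the helper returns NULL for malformed media,
// but the result is used without a check.
bitmap = mtmd_helper_bitmap_init_from_buf(decoded, decoded_len);
process(bitmap->data);                 // segfault when bitmap is NULL

// FIXED: validate the return value before dereferencing.
bitmap = mtmd_helper_bitmap_init_from_buf(decoded, decoded_len);
if (bitmap == NULL) {
    return error("invalid image data");
}
process(bitmap->data);
```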

Exploitation Scenario

Attacker scans for exposed Ollama instances on port 11434 using Shodan or masscan. They identify a target running a vision model (discoverable via GET /api/tags which lists loaded models). They craft a POST request to /api/chat with model set to a multimodal model and an 'images' array containing a base64-encoded payload that decodes to invalid binary data (e.g., random bytes or a truncated file). Ollama passes this to mtmd_helper_bitmap_init_from_buf, which returns NULL due to invalid media format. The subsequent pointer dereference triggers a segfault, crashing the runner process. The attacker can repeat this every time the service restarts, maintaining a persistent DoS condition. No credentials, no AI/ML knowledge, no exploit development required — a 10-line Python script suffices. In an enterprise context where developers have enabled Ollama on their workstations for AI-assisted coding with vision capabilities, this could also be triggered by a malicious web page or document that causes the local Ollama API to be queried with crafted image data.
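
For authorized testing of your own staging deployment, the request shape can be sketched as below. The payload layout follows Ollama's documented /api/chat schema (messages with a base64 images array); the function name and model default are illustrative. A patched server should reject the malformed image gracefully, while a vulnerable runner crashes.

```python
import base64
import json


def build_probe_payload(model: str = "llava") -> dict:
    """Build a /api/chat request whose 'images' entry decodes to invalid media.

    For authorized testing of your own deployment only (CVE-2025-15514).
    """
    # Valid base64, but the decoded bytes are not a valid image.
    junk = base64.b64encode(b"\x00" * 64).decode()
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": "describe this image", "images": [junk]},
        ],
        "stream": False,
    }


payload = build_probe_payload()
body = json.dumps(payload).encode()
# POST `body` to http://127.0.0.1:11434/api/chat on a staging instance,
# then watch the runner process / systemd journal for a crash.
```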

Weaknesses (CWE)

CWE-476 - NULL Pointer Dereference

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

Timeline

Published
January 12, 2026
Last Modified
January 21, 2026
First Seen
January 12, 2026

Related Vulnerabilities