CVE-2026-44556

GHSA-hp5m-24vp-vq2q HIGH
Published May 8, 2026

Summary

The /responses endpoint in the OpenAI router accepts any authenticated user and forwards requests directly to upstream LLM providers without enforcing per-model access control. While the primary chat completion endpoint (generate_chat_completion) checks model ownership, group membership, and AccessGrants before allowing a request, the /responses proxy only verifies that the user has a valid session.


Affected Systems

Package: open-webui
Ecosystem: pip
Vulnerable Range: <= 0.8.12
Patched: 0.9.0


Severity & Risk

CVSS 3.1: 7.1 / 10
EPSS: N/A
Exploitation Status: No known exploitation
Sophistication: N/A

Attack Surface

AV (Attack Vector): Network
AC (Attack Complexity): Low
PR (Privileges Required): Low
UI (User Interaction): None
S (Scope): Unchanged
C (Confidentiality): Low
I (Integrity): None
A (Availability): High

Recommended Action

Patch available

Update open-webui to version 0.9.0
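The vulnerable range can be checked programmatically. A minimal sketch using a naive dotted-numeric version parser (the version strings are from the advisory's affected-systems table; the helper names are illustrative, not part of any library):

```python
# Check whether an installed open-webui version falls in the advisory's
# vulnerable range (<= 0.8.12, patched in 0.9.0).

def parse(version: str) -> tuple[int, ...]:
    """Parse a simple dotted-numeric version string into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

PATCHED = parse("0.9.0")

def is_vulnerable(installed: str) -> bool:
    """True if the given version predates the first patched release."""
    return parse(installed) < PATCHED

print(is_vulnerable("0.8.12"))  # True: within the vulnerable range
print(is_vulnerable("0.9.0"))   # False: patched
```

In practice, `pip show open-webui` reports the installed version; anything below 0.9.0 should be upgraded with `pip install --upgrade open-webui`.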


Frequently Asked Questions

What is CVE-2026-44556?

CVE-2026-44556 is a missing-authorization vulnerability in Open WebUI's /responses passthrough endpoint, which forwards requests to upstream LLM providers without enforcing per-model access control, allowing any authenticated user to interact with any model configured on the instance.

Is CVE-2026-44556 actively exploited?

No confirmed active exploitation of CVE-2026-44556 has been reported, but organizations should still patch proactively.

How to fix CVE-2026-44556?

Update to patched version: open-webui 0.9.0.

What is the CVSS score for CVE-2026-44556?

CVE-2026-44556 has a CVSS v3.1 base score of 7.1 (HIGH).

Technical Details

NVD Description

## Summary

The /responses endpoint in the OpenAI router accepts any authenticated user and forwards requests directly to upstream LLM providers without enforcing per-model access control. While the primary chat completion endpoint (generate_chat_completion) checks model ownership, group membership, and AccessGrants before allowing a request, the /responses proxy only validates that the user has a valid session via get_verified_user. This allows any authenticated user, regardless of role or group assignment, to interact with any model configured on the instance by sending a POST request to /api/openai/responses with an arbitrary model ID.

## Impact

As per OWASP Top 10 for LLM Applications:

- **Model Denial of Service (OWASP LLM04):** An unauthorized user can submit resource-intensive requests to expensive models (e.g., o1-pro, GPT-4o) that were explicitly restricted by the administrator. In shared deployments, this can exhaust API budgets or rate limits, causing total service disruption for all legitimate users.
- **Model Theft (OWASP LLM10):** If the instance proxies access to fine-tuned or self-hosted models, unauthorized users can freely interact with them, enabling capability extraction or model distillation without authorization.
- **Access Policy Bypass:** Administrators lose the ability to enforce cost-tier restrictions, team-based model assignments, or compliance boundaries through the existing access control system.

The endpoint is a raw passthrough proxy and does not resolve workspace model configurations (system prompts, knowledge bases, RAG pipelines). Therefore, workspace-specific confidential data is not directly exposed through this vector.

PR: https://github.com/open-webui/open-webui/pull/23481

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:H
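The 7.1 base score follows arithmetically from this vector. A sketch of the CVSS v3.1 base-score calculation, using the metric weights and Roundup function from the FIRST CVSS v3.1 specification:

```python
import math

# CVSS v3.1 base score for CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:H.
# Weights from the FIRST CVSS v3.1 specification.
AV, AC, PR, UI = 0.85, 0.77, 0.62, 0.85   # Network, Low, Low (Scope Unchanged), None
C, I, A = 0.22, 0.0, 0.56                 # Low, None, High

def roundup(x: float) -> float:
    """Spec-defined Roundup: smallest value, to one decimal place, >= x."""
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

iss = 1 - (1 - C) * (1 - I) * (1 - A)          # Impact Sub-Score
impact = 6.42 * iss                            # Scope Unchanged form
exploitability = 8.22 * AV * AC * PR * UI
base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 7.1
```

The High availability component (A:H) dominates the impact term, matching the advisory's emphasis on budget/rate-limit exhaustion over data exposure.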

Timeline

Published
May 8, 2026
Last Modified
May 8, 2026
First Seen
May 8, 2026
