OpenClaw: HTTP scope bypass enables model enumeration
OpenClaw's HTTP /v1/models endpoint skips the scope enforcement that its WebSocket RPC path applies, letting any authenticated operator, regardless of granted scopes, enumerate deployed model metadata. Upgrade to 2026.3.24 immediately; until patched, block /v1/models via a reverse-proxy ACL for non-read-scoped operators. The deeper concern is that this breaks the operator RBAC model at the transport level: scope boundaries you believe are enforced are not.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| openclaw | npm | <= 2026.3.23 | 2026.3.24 |
Any deployment of the openclaw npm package at version 2026.3.23 or earlier is affected.
Recommended Action
- Patch: upgrade the openclaw npm package to 2026.3.24 or later immediately.
- Workaround (pre-patch): block HTTP GET /v1/models and /v1/models/:id at the reverse proxy or WAF layer for principals lacking operator.read scope.
- Audit: review all operator credential assignments and verify scope assignments follow least-privilege—enumerate which operators have non-read scopes currently in use.
- Detection: alert on HTTP GET /v1/models requests from principals that are simultaneously rejected on WebSocket models.list—this delta pattern signals deliberate bypass behavior.
- Regression test post-patch: confirm operator.approvals without read is rejected on HTTP /v1/models (positive and negative controls per patch advisory).
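The pre-patch workaround can be expressed as a reverse-proxy deny rule. A minimal nginx sketch, with the caveat that nginx cannot inspect OpenClaw operator scopes, so this blocks the compatibility routes outright rather than only for non-read-scoped principals (re-open them selectively, e.g. by source allowlist, if read-scoped clients depend on them):

```nginx
# Block the OpenAI-compatible model routes entirely until 2026.3.24
# is deployed; scope-aware filtering is not possible at this layer.
location ~ ^/v1/models(/.*)?$ {
    return 403;
}
```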
Technical Details
NVD Description
> Fixed in OpenClaw 2026.3.24, the current shipping release.
>
> ## Summary
>
> The OpenAI-compatible HTTP endpoint `/v1/models` accepts bearer auth but does not enforce operator method scopes. In contrast, the WebSocket RPC path enforces `operator.read` for `models.list`. A caller connected with `operator.approvals` (no read scope) is rejected for `models.list` (`missing scope: operator.read`) but can still enumerate model metadata through HTTP `/v1/models`. Confirmed on current `main` at commit `06de515b6c42816b62ec752e1c221cab67b38501`.
>
> ## Details
>
> The WS control-plane path enforces role/scope checks centrally before dispatching methods. For non-admin operators, this includes required method scopes such as `operator.read` for `models.list`. The HTTP compatibility path for `/v1/models` performs bearer authorization and then returns model metadata; it does not apply an equivalent scope check. As reproduced, a caller with only `operator.approvals` can:
>
> 1. connect successfully,
> 2. fail `models.list` over WS with `missing scope: operator.read`,
> 3. fetch `/v1/models` over HTTP with status 200 and model data.
>
> This is a cross-surface authorization inconsistency where the stricter WS policy can be bypassed via HTTP.
>
> ## Impact
>
> - Callers lacking `operator.read` can still enumerate gateway model metadata through HTTP compatibility routes.
> - Breaks scope-model consistency between WS RPC and HTTP surfaces.
> - Weakens least-privilege expectations for operators granted non-read scopes.
>
> ## Patch Suggestion
>
> ### 1) Enforce read scope on `/v1/models` routes
>
> Apply a scope gate equivalent to `models.list` before serving `/v1/models` or `/v1/models/:id`.
>
> ### 2) Reuse centralized scope-authorization helper for HTTP compatibility endpoints
>
> Use the same operator scope logic used by WS dispatch (`authorizeOperatorScopesForMethod(...)`) to prevent policy drift.
>
> ### 3) Add regression tests
>
> Keep this PoC and add explicit negative/positive controls:
>
> - `operator.approvals` without read is rejected on HTTP `/v1/models`.
> - `operator.read` is accepted on both WS `models.list` and HTTP `/v1/models`.
>
> ## Credit
>
> Reported by @zpbrent.
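The patch suggestion amounts to running one shared scope gate on both surfaces. A minimal sketch of that idea; all types and function names here are hypothetical (the advisory names OpenClaw's real helper, `authorizeOperatorScopesForMethod`, but not its signature):

```typescript
// Hypothetical shapes; illustrative only, not OpenClaw's actual API.
type Operator = { role: string; scopes: string[] };

// Method -> required scope table, shared by WS dispatch and HTTP routes
// so the two surfaces cannot drift apart.
const REQUIRED_SCOPE: Record<string, string> = {
  "models.list": "operator.read",
};

function authorizeScope(op: Operator, method: string): { ok: boolean; error?: string } {
  if (op.role === "admin") return { ok: true }; // admins bypass method scopes
  const needed = REQUIRED_SCOPE[method];
  if (needed && !op.scopes.includes(needed)) {
    return { ok: false, error: `missing scope: ${needed}` };
  }
  return { ok: true };
}

// The HTTP compatibility route must call the same gate as WS dispatch.
// Pre-patch, the HTTP path returned 200 unconditionally after bearer auth.
function handleHttpModels(op: Operator): number {
  return authorizeScope(op, "models.list").ok ? 200 : 403;
}
```

With a shared table like this, adding a new scoped method automatically covers every transport that consults the gate.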
Exploitation Scenario
An attacker with a compromised or legitimately issued operator.approvals bearer token connects to the OpenClaw gateway. The WebSocket models.list call is rejected with `missing scope: operator.read`. The attacker pivots to HTTP, sends GET /v1/models with the identical bearer token, and receives HTTP 200 with the full model inventory: names, versions, and capability metadata. Armed with this enumeration, the attacker knows exactly which LLM endpoints are deployed and can mount targeted inference probing, cost harvesting, or further API enumeration against high-value model endpoints while remaining within their ostensibly limited operator role.
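The delta pattern in this scenario (WS scope rejection followed by an HTTP 200 on the same route family, same principal) can be flagged directly from gateway access logs. A minimal correlation sketch; the log-event shape and field names are hypothetical:

```typescript
// Hypothetical normalized log event; adapt field names to your log pipeline.
type LogEvent = {
  principal: string;
  surface: "ws" | "http";
  method: string; // e.g. "models.list" or "GET /v1/models"
  outcome: "ok" | "scope_denied";
};

// Flag principals scope-denied on WS models.list that nonetheless
// succeeded on HTTP GET /v1/models -- the cross-surface bypass delta.
function findBypassSuspects(events: LogEvent[]): string[] {
  const wsDenied = new Set(
    events
      .filter(e => e.surface === "ws" && e.method === "models.list" && e.outcome === "scope_denied")
      .map(e => e.principal)
  );
  return [...new Set(
    events
      .filter(e => e.surface === "http" && e.method === "GET /v1/models" && e.outcome === "ok")
      .map(e => e.principal)
  )].filter(p => wsDenied.has(p));
}
```

Running this over a rolling window and alerting on any non-empty result implements the detection bullet above without needing payload inspection.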