CVE-2026-25083
GROWI deployments using the OpenAI assistant integration expose all AI conversation threads to any authenticated user who can guess or enumerate an assistant identifier. Patch to v7.4.6 or later immediately; if patching is not possible, disable AI assistant features or restrict GROWI access to trusted users only. Treat all historical threads in affected deployments as potentially compromised, and audit them for sensitive data disclosure.
Severity & Risk
Recommended Action
1. PATCH: Upgrade GROWI to v7.4.6 or later (vendor advisory growi.co.jp/news/41 and JVN#46373837).
2. MITIGATE: If immediate patching is not possible, disable OpenAI integration in GROWI admin settings.
3. AUDIT: Review OpenAI thread access logs for unauthorized cross-user access; check for injected content in shared AI threads.
4. SCOPE: Inventory all GROWI instances in your environment; prioritize internet-facing or contractor-accessible instances.
5. DETECTION: Alert on API calls to GROWI's AI thread/message endpoints from users who are not the thread owner; check application logs for cross-user thread ID access patterns.
6. ROTATE: If sensitive data was shared in AI threads, consider the information compromised; rotate any credentials and revoke any sensitive content discussed therein.
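The detection step above can be sketched as a simple log-correlation pass: flag any request where the requesting user is not the recorded owner of the AI thread. The log format, field names, and ownership table below are illustrative assumptions, not GROWI's actual log schema.

```python
import csv
import io

# Hypothetical application log excerpt: who accessed which AI thread.
SAMPLE_LOG = """\
timestamp,user,thread_id
2026-02-01T09:00:00Z,alice,thread_a1
2026-02-01T09:05:00Z,bob,thread_a1
2026-02-01T09:06:00Z,bob,thread_b1
"""

# Hypothetical ownership table (thread_id -> owner), e.g. exported from the app DB.
THREAD_OWNERS = {"thread_a1": "alice", "thread_b1": "bob"}

def find_cross_user_access(log_text, owners):
    """Return (user, thread_id) pairs where the accessor is not the thread owner."""
    hits = []
    for row in csv.DictReader(io.StringIO(log_text)):
        owner = owners.get(row["thread_id"])
        if owner is not None and row["user"] != owner:
            hits.append((row["user"], row["thread_id"]))
    return hits

print(find_cross_user_access(SAMPLE_LOG, THREAD_OWNERS))
# -> [('bob', 'thread_a1')]
```

In a real deployment the same join would run against your SIEM or access logs; any hit is a candidate exploitation event worth investigating under step 3 (AUDIT).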
Classification
Compliance Impact
This CVE is relevant to:
Technical Details
NVD Description
GROWI OpenAI thread/message API endpoints do not perform authorization. Affected are v7.4.5 and earlier versions. A logged-in user who knows a shared AI assistant's identifier may view and/or tamper with another user's threads/messages.
Exploitation Scenario
An authenticated GROWI user (e.g., a low-privilege contractor) discovers or enumerates the OpenAI assistant thread identifier of a more privileged user—for example, by observing identifiers in URLs, shared links, or through brute-force of sequential/predictable IDs. The attacker calls the unprotected API endpoint directly to read the victim's AI conversation history, harvesting sensitive business context, credentials, or strategic plans discussed with the AI assistant. In a more sophisticated variant, the attacker injects crafted messages into the victim's thread, poisoning the AI context so that future responses to the victim contain attacker-controlled misinformation or malicious instructions—a server-side prompt injection with persistent effect.
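The root cause is a missing ownership check on the thread/message endpoints. A minimal sketch of the control the fix restores is shown below; the names (`Thread`, `get_thread`, the in-memory store) are hypothetical and do not reflect GROWI's actual code.

```python
from dataclasses import dataclass, field

@dataclass
class Thread:
    thread_id: str
    owner: str
    messages: list = field(default_factory=list)

# Hypothetical thread store keyed by identifier.
THREADS = {
    "thread_a1": Thread("thread_a1", "alice", ["Q4 budget discussion"]),
}

class Forbidden(Exception):
    pass

def get_thread(requesting_user: str, thread_id: str) -> Thread:
    thread = THREADS[thread_id]
    # Vulnerable versions effectively skipped this check, so any authenticated
    # user who knew (or enumerated) thread_id could read or modify the thread.
    if thread.owner != requesting_user:
        raise Forbidden(f"{requesting_user} does not own {thread_id}")
    return thread
```

With the check in place, `get_thread("alice", "thread_a1")` succeeds while `get_thread("bob", "thread_a1")` raises `Forbidden`; without it, knowledge of the identifier alone grants read and write access, which is exactly the enumeration scenario described above.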