CVE-2026-25083: GROWI: Missing Auth allows unauthorized operations
GROWI deployments using OpenAI assistant integration expose all AI conversation threads to any authenticated user who can guess or enumerate an assistant identifier. Patch to v7.4.6+ immediately; if patching is not possible, disable AI assistant features or restrict GROWI access to trusted users only. Treat all historical threads in affected deployments as potentially compromised and audit for sensitive data disclosure.
Risk Assessment
Medium-High for organizations running GROWI with OpenAI integration. Requires authentication, which limits exposure to insider threats and compromised accounts, but no additional privilege is needed beyond a valid login. The attack surface is the AI assistant identifier, which may be discoverable through normal application use, shared links, or brute force. The tamper vector elevates risk beyond read-only disclosure: an attacker can inject content into other users' AI threads, enabling context poisoning attacks that could manipulate subsequent AI responses seen by victims.
Recommended Action
Six steps:
1. PATCH: Upgrade GROWI to v7.4.6 or later (vendor advisory growi.co.jp/news/41 and JVN#46373837).
2. If immediate patching is not possible, disable OpenAI integration in GROWI admin settings.
3. AUDIT: Review OpenAI thread access logs for unauthorized cross-user access; check for injected content in shared AI threads.
4. SCOPE: Inventory all GROWI instances in your environment; prioritize internet-facing or contractor-accessible instances.
5. DETECTION: Alert on API calls to GROWI's AI thread/message endpoints from users who are not the thread owner; check application logs for cross-user thread ID access patterns.
6. ROTATE: If sensitive data was shared in AI threads, consider the information compromised and rotate any credentials or revoke sensitive content discussed therein.
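The DETECTION step above can be sketched as a simple log scan. This is a minimal illustration, assuming your own logging pipeline records the requesting user, the owner of the AI thread being accessed, and the endpoint path; the field names and the endpoint substring are assumptions, not a GROWI log format:

```typescript
// Sketch only: field names and the endpoint pattern below are assumptions
// about a local logging setup, not GROWI internals.
interface AccessLogEntry {
  userId: string;        // authenticated user making the request
  threadOwnerId: string; // owner of the AI thread being accessed
  endpoint: string;      // request path, e.g. an AI thread/message API route
  timestamp: string;
}

// Flag requests to AI thread endpoints where the caller is not the owner.
function findCrossUserThreadAccess(logs: AccessLogEntry[]): AccessLogEntry[] {
  return logs.filter(
    (e) => e.endpoint.includes("/openai/") && e.userId !== e.threadOwnerId,
  );
}
```

Feeding exported access logs through a filter like this gives a first-pass list of candidate cross-user accesses for the AUDIT step to investigate.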
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2026-25083?
GROWI deployments using OpenAI assistant integration expose all AI conversation threads to any authenticated user who can guess or enumerate an assistant identifier. Patch to v7.4.6+ immediately; if patching is not possible, disable AI assistant features or restrict GROWI access to trusted users only. Treat all historical threads in affected deployments as potentially compromised—audit for sensitive data disclosure.
Is CVE-2026-25083 actively exploited?
No confirmed active exploitation of CVE-2026-25083 has been reported, but organizations should still patch proactively.
How to fix CVE-2026-25083?
1. PATCH: Upgrade GROWI to v7.4.6 or later (vendor advisory growi.co.jp/news/41 and JVN#46373837). 2. If immediate patching is not possible, disable OpenAI integration in GROWI admin settings. 3. AUDIT: Review OpenAI thread access logs for unauthorized cross-user access; check for injected content in shared AI threads. 4. SCOPE: Inventory all GROWI instances in your environment; prioritize internet-facing or contractor-accessible instances. 5. DETECTION: Alert on API calls to GROWI's AI thread/message endpoints from users who are not the thread owner—check application logs for cross-user thread ID access patterns. 6. ROTATE: If sensitive data was shared in AI threads, consider the information compromised and rotate any credentials or revoke sensitive content discussed therein.
What systems are affected by CVE-2026-25083?
This vulnerability affects the following AI/ML architecture patterns: LLM API integrations, AI assistant platforms, Collaborative knowledge bases with AI features, Multi-tenant AI chat/thread systems, OpenAI Assistants API consumers.
What is the CVSS score for CVE-2026-25083?
No CVSS score has been assigned yet.
Technical Details
NVD Description
GROWI's OpenAI thread/message API endpoints do not perform authorization checks. Versions v7.4.5 and earlier are affected. A logged-in user who knows a shared AI assistant's identifier may view and/or tamper with other users' threads and messages.
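The root cause is a classic missing object-level authorization check (BOLA/IDOR): the endpoint trusts that knowing an identifier implies permission. A minimal sketch of the flawed pattern and its fix in TypeScript; the data shapes and function names are illustrative assumptions, not GROWI's actual code:

```typescript
// Illustrative only: names and shapes are assumptions, not GROWI internals.
interface Thread {
  id: string;
  ownerUserId: string;
  messages: string[];
}

// Vulnerable pattern: any authenticated caller who knows threadId gets the data.
function getThreadVulnerable(
  store: Map<string, Thread>,
  threadId: string,
): Thread | undefined {
  return store.get(threadId); // no check that the caller owns the thread
}

// Fixed pattern: verify object-level ownership before returning or mutating.
function getThreadFixed(
  store: Map<string, Thread>,
  threadId: string,
  requestingUserId: string,
): Thread | undefined {
  const thread = store.get(threadId);
  if (!thread || thread.ownerUserId !== requestingUserId) {
    return undefined; // treat "not yours" the same as "not found"
  }
  return thread;
}
```

Returning the same result for "not found" and "not authorized" also avoids leaking which thread IDs exist, which matters when identifiers are guessable.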
Exploitation Scenario
An authenticated GROWI user (e.g., a low-privilege contractor) discovers or enumerates the OpenAI assistant thread identifier of a more privileged user—for example, by observing identifiers in URLs, shared links, or through brute-force of sequential/predictable IDs. The attacker calls the unprotected API endpoint directly to read the victim's AI conversation history, harvesting sensitive business context, credentials, or strategic plans discussed with the AI assistant. In a more sophisticated variant, the attacker injects crafted messages into the victim's thread, poisoning the AI context so that future responses to the victim contain attacker-controlled misinformation or malicious instructions—a server-side prompt injection with persistent effect.
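Auditing for the tamper variant described above can reuse the same ownership logic: in a single-owner thread model, any stored message whose author is not the thread owner is suspect. A hedged sketch, assuming an export of thread data that includes per-message author IDs (the record shapes are assumptions for illustration):

```typescript
// Sketch only: record shapes are assumptions about an exported data set,
// not GROWI's schema.
interface ThreadRecord {
  id: string;
  ownerUserId: string;
}

interface StoredMessage {
  threadId: string;
  authorUserId: string;
  content: string;
}

// Flag messages whose author differs from the thread owner; in a
// single-owner thread model these indicate possible injected content.
function findSuspectMessages(
  threads: ThreadRecord[],
  messages: StoredMessage[],
): StoredMessage[] {
  const owners = new Map(threads.map((t) => [t.id, t.ownerUserId]));
  return messages.filter((m) => {
    const owner = owners.get(m.threadId);
    return owner !== undefined && owner !== m.authorUserId;
  });
}
```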
Related Vulnerabilities
CVE-2026-33663 (10.0) n8n: member role steals plaintext HTTP credentials. Same attack type: Data Leakage.
CVE-2026-34938 (10.0) praisonaiagents: sandbox bypass enables full host RCE. Same attack type: Prompt Injection.
CVE-2025-5120 (10.0) smolagents: sandbox escape enables unauthenticated RCE. Same attack type: Data Leakage.
CVE-2023-3765 (10.0) MLflow: path traversal allows arbitrary file read. Same attack type: Data Leakage.
CVE-2026-25052 (9.9) n8n: security flaw enables exploitation. Same attack type: Data Leakage.