CVE-2025-63390: anythingllm: Missing Auth allows unauthenticated access

MEDIUM
Published December 18, 2025
CISO Take

If your organization runs AnythingLLM v1.8.5, assume your system prompts and full AI workspace configurations are publicly readable — no credentials required. This is a recon goldmine: attackers enumerate your prompts, model choices, and agent configurations before launching targeted prompt injection or social engineering attacks. Patch immediately or block unauthenticated access to /api/workspaces at the network/reverse-proxy layer.

Risk Assessment

Effective risk is higher than CVSS 5.3 suggests. While confidentiality impact is scored 'low', system prompts routinely contain sensitive business logic, security guardrails, proprietary instructions, and occasionally embedded credentials or API references. The combination of network-accessible, zero-auth, zero-complexity exploitation targeting an LLM platform makes this a high-value recon vector. Organizations exposed to the internet are at immediate risk; internal-only deployments face insider/lateral-movement risk.

Affected Systems

Package: anythingllm
Vulnerable Range: v1.8.5
Patched: No patch available

Do you use anythingllm? You're affected.

Severity & Risk

CVSS 3.1
5.3 / 10
EPSS
0.04%
chance of exploitation in 30 days
Higher than 11% of all CVEs
Exploitation Status
No known exploitation
Sophistication
Trivial

Attack Surface

AV (Attack Vector): Network
AC (Attack Complexity): Low
PR (Privileges Required): None
UI (User Interaction): None
S (Scope): Unchanged
C (Confidentiality): Low
I (Integrity): None
A (Availability): None

Recommended Action

6 steps
  1. PATCH

    Upgrade AnythingLLM to the latest version — check https://github.com/Mintplex-Labs/anything-llm/releases for a fix addressing CWE-306 on /api/workspaces.

  2. IMMEDIATE WORKAROUND

    Block unauthenticated access to /api/workspaces at reverse proxy/WAF/firewall level — require valid session tokens before routing to this endpoint.

  3. AUDIT

    Review all system prompts (openAiPrompt fields) for embedded credentials, internal URLs, sensitive instructions, or security bypass information that should now be considered compromised.

  4. ROTATE

    If system prompts reference API keys, internal hostnames, or credentials, rotate them now.

  5. DETECT

    Query logs for unauthenticated GET requests to /api/workspaces — any hits from external IPs indicate active exploitation.

  6. HARDEN

    Apply network segmentation — AnythingLLM should not be internet-accessible unless explicitly required.

CISA SSVC Assessment

Decision: Track
Exploitation: none
Automatable: Yes
Technical Impact: partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
Article 9 - Risk management system
ISO 42001
A.6.2.6 - Access control to AI systems
A.8.4 - Protection of AI system information
NIST AI RMF
GOVERN-6.1 - Policies and procedures for AI risk
MANAGE-2.4 - Risk Treatment and Residual Risk Management
PROTECT-2.1 - AI system configuration and sensitive data protection
OWASP LLM Top 10
LLM02:2025 - Sensitive Information Disclosure
LLM07:2025 - System Prompt Leakage

Frequently Asked Questions

What is CVE-2025-63390?

CVE-2025-63390 is a missing-authentication vulnerability (CWE-306) in AnythingLLM v1.8.5. The /api/workspaces endpoint performs no authentication check, so unauthenticated remote attackers can enumerate every configured workspace — including system prompts, model and agent configurations, and operational parameters — with a single HTTP GET request.

Is CVE-2025-63390 actively exploited?

No confirmed active exploitation of CVE-2025-63390 has been reported, but organizations should still patch proactively.

How to fix CVE-2025-63390?

1. PATCH: Upgrade AnythingLLM to the latest version — check https://github.com/Mintplex-Labs/anything-llm/releases for a fix addressing CWE-306 on /api/workspaces.
2. IMMEDIATE WORKAROUND: Block unauthenticated access to /api/workspaces at the reverse proxy/WAF/firewall level — require valid session tokens before routing to this endpoint.
3. AUDIT: Review all system prompts (openAiPrompt fields) for embedded credentials, internal URLs, sensitive instructions, or security bypass information that should now be considered compromised.
4. ROTATE: If system prompts reference API keys, internal hostnames, or credentials, rotate them now.
5. DETECT: Query logs for unauthenticated GET requests to /api/workspaces — any hits from external IPs indicate active exploitation.
6. HARDEN: Apply network segmentation — AnythingLLM should not be internet-accessible unless explicitly required.

What systems are affected by CVE-2025-63390?

This vulnerability affects the following AI/ML architecture patterns: LLM application platforms, RAG pipelines, Agent frameworks, API-exposed LLM services, Internal AI assistants.

What is the CVSS score for CVE-2025-63390?

CVE-2025-63390 has a CVSS v3.1 base score of 5.3 (MEDIUM). The EPSS exploitation probability is 0.04%.

Technical Details

NVD Description

An authentication bypass vulnerability exists in AnythingLLM v1.8.5 via the /api/workspaces endpoint. The endpoint fails to implement proper authentication checks, allowing unauthenticated remote attackers to enumerate and retrieve detailed information about all configured workspaces. Exposed data includes: workspace identifiers (id, name, slug), AI model configurations (chatProvider, chatModel, agentProvider), system prompts (openAiPrompt), operational parameters (temperature, history length, similarity thresholds), vector search settings, chat modes, and timestamps.

Exploitation Scenario

An attacker discovers an AnythingLLM instance via Shodan/Censys or targeted reconnaissance. They send a single unauthenticated HTTP GET to /api/workspaces and receive a JSON response listing every configured workspace with full metadata: the names and slugs reveal business context, chatProvider/chatModel reveal the exact LLM in use, and openAiPrompt exposes the system prompt verbatim — including security restrictions and persona instructions. The attacker uses the system prompt content to craft precise prompt injection payloads that bypass stated restrictions, knowing exactly what guardrails to circumvent. They also identify agentProvider settings to understand what tools the agent can invoke, planning further exploitation via agent tool abuse.
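The recon value of a single response can be sketched as follows. The target host, port, and the exact response shape ({"workspaces": [...]}) are assumptions for illustration; the field names (slug, chatProvider, chatModel, openAiPrompt) come from the NVD description above.

```python
import json
# Against a live instance, the request itself needs no credentials:
# from urllib.request import urlopen
# raw = urlopen("http://TARGET:3001/api/workspaces").read()  # hypothetical target

def summarize(raw):
    """Extract the recon-relevant fields from an /api/workspaces response."""
    data = json.loads(raw)
    # Response shape ({"workspaces": [...]}) is an assumption based on the
    # fields listed in the NVD description.
    return [
        {
            "slug": ws.get("slug"),
            "model": f'{ws.get("chatProvider")}/{ws.get("chatModel")}',
            "system_prompt": ws.get("openAiPrompt"),
        }
        for ws in data.get("workspaces", [])
    ]
```

Each summarized entry gives an attacker the exact model in use and the verbatim system prompt — the two inputs needed to craft targeted prompt injection payloads.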

Weaknesses (CWE)

CWE-306: Missing Authentication for Critical Function

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N
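The 5.3 base score follows directly from this vector. A minimal sketch of the CVSS v3.1 base-score arithmetic (metric weights from the CVSS v3.1 specification; the round-up is simplified to one-decimal ceiling):

```python
import math

# CVSS v3.1 metric weights for AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N
av, ac, pr, ui = 0.85, 0.77, 0.85, 0.85   # Network / Low / None / None
c, i, a = 0.22, 0.0, 0.0                  # C:Low, I:None, A:None

iss = 1 - (1 - c) * (1 - i) * (1 - a)     # 0.22
impact = 6.42 * iss                       # Scope unchanged: 1.4124
exploitability = 8.22 * av * ac * pr * ui # ~3.887

# Spec-defined Roundup: smallest one-decimal value >= the input.
base = math.ceil(min(impact + exploitability, 10) * 10) / 10
print(base)  # 5.3
```

Note how exploitability (~3.9 of the 5.3) dominates: the score is moderate only because the confidentiality impact is rated Low, which is exactly what the Risk Assessment section argues understates the real-world value of leaked system prompts.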

Timeline

Published
December 18, 2025
Last Modified
January 22, 2026
First Seen
December 18, 2025

Related Vulnerabilities