CVE-2023-34094: ChuanhuChatGPT: config exposure leaks API keys

MEDIUM
Published June 2, 2023
CISO Take

ChuanhuChatGPT deployments without authentication configured expose their config.json file to any unauthenticated network attacker, directly leaking LLM API keys stored in plaintext. The attack is trivial — no credentials, no user interaction, just a network request — and the exposed file typically contains API keys for OpenAI or other LLM providers. While not in CISA KEV and scored medium (CVSS 5.3), the practical blast radius exceeds the rating: stolen keys enable unauthorized model inference, cost harvesting against the victim's account, and potential access to associated provider resources such as fine-tuned models or uploaded files. Update to commit bfac445 or later, enable access authentication immediately, and rotate any API keys that may have been exposed.

Sources: NVD, GitHub Advisory, ATLAS

Risk Assessment

Practical risk exceeds the CVSS 5.3 medium rating. All four network-facing exploitability factors are worst-case: AV:N, AC:L, PR:N, UI:N — making this trivially exploitable by any unauthenticated actor with network reach. The real damage is downstream: stolen LLM API keys enable cost harvesting, unauthorized inference, and potential pivot into associated provider accounts. Self-hosted deployments in enterprise or research environments assuming network perimeter protection are silently exposed if the service is accidentally internet-facing or if an internal threat actor is present.

Affected Systems

Package
chuanhuchatgpt
Ecosystem
pip
Vulnerable Range
≤ 20230526
Patched
No patch

Do you use chuanhuchatgpt? Versions 20230526 and earlier are affected.

Severity & Risk

CVSS 3.1
5.3 / 10
EPSS
N/A
Exploitation Status
No known exploitation
Sophistication
Trivial

Recommended Action

  1. Patch: upgrade to commit bfac445 or any release after 20230526 — the vulnerability is fixed there.
  2. Workaround: enable access authentication on all ChuanhuChatGPT deployments immediately; do not rely on network perimeter alone.
  3. Key rotation: rotate all LLM API keys stored in config.json for any instance potentially reachable from untrusted networks.
  4. Detection: audit web server access logs for GET requests to config.json; any HTTP 200 response to an unexpected IP indicates potential compromise.
  5. Inventory: scan internal network for unauthenticated ChuanhuChatGPT instances via port scanning and UI fingerprinting.
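The log-audit step above can be sketched as a small script. This is a minimal sketch, not a definitive detector: it assumes combined-format access logs, and the sample lines below are fabricated for illustration; adapt the pattern to your server's log format.

```python
import re

# Flag GET requests for config.json that returned HTTP 200 in
# combined-format access logs (format is an assumption; adjust as needed).
LOG_PATTERN = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] '
    r'"GET (?P<path>\S*config\.json\S*) HTTP/[\d.]+" (?P<status>\d{3})'
)

def suspicious_hits(lines):
    """Return (ip, path) pairs for successful config.json fetches."""
    hits = []
    for line in lines:
        m = LOG_PATTERN.match(line)
        if m and m.group("status") == "200":
            hits.append((m.group("ip"), m.group("path")))
    return hits

# Fabricated example log lines for illustration:
sample = [
    '203.0.113.7 - - [02/Jun/2023:10:00:01 +0000] "GET /config.json HTTP/1.1" 200 512',
    '198.51.100.2 - - [02/Jun/2023:10:00:05 +0000] "GET /index.html HTTP/1.1" 200 1024',
    '203.0.113.7 - - [02/Jun/2023:10:00:09 +0000] "GET /config.json HTTP/1.1" 404 0',
]
print(suspicious_hits(sample))
```

Only the successful fetch is flagged; a 404 on config.json indicates a probe that failed, which is still worth noting but does not imply key exposure.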

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.2 - AI system information security controls
NIST AI RMF
GOVERN 6.2 - Organizational policies for AI risk management
OWASP LLM Top 10
LLM06 - Sensitive Information Disclosure

Technical Details

NVD Description

ChuanhuChatGPT is a graphical user interface for ChatGPT and many large language models. A vulnerability in versions 20230526 and prior allows unauthorized access to the config.json file of the privately deployed ChuanghuChatGPT project, when authentication is not configured. The attacker can exploit this vulnerability to steal the API keys in the configuration file. The vulnerability has been fixed in commit bfac445. As a workaround, setting up access authentication can help mitigate the vulnerability.

Exploitation Scenario

An attacker scans internet-facing hosts for ChuanhuChatGPT deployments identifiable by UI fingerprinting or known default ports. On a deployment without authentication configured, they issue a direct HTTP GET to the config.json endpoint. The server returns the file in plaintext, including OpenAI or other LLM provider API keys. The attacker then uses these keys to run automated workloads under the victim's account — exhausting API credits, querying the model with sensitive prompts, or enumerating provider-side assets such as fine-tuned models and uploaded files. If the key carries organization-level permissions, the attacker gains visibility into the full API usage history and any data stored with the provider.
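For defenders verifying their own deployments, the core of the check is whether an unauthenticated fetch of config.json returns key material. A minimal self-check sketch, assuming typical field names (the exact ChuanhuChatGPT config schema may differ by version, and the endpoint path is an assumption):

```python
import json

# Field names below are assumptions about typical config.json contents.
SENSITIVE_FIELDS = {"openai_api_key", "api_key", "azure_api_key"}

def exposed_secrets(config_text):
    """Given the body returned by an unauthenticated GET of config.json,
    return the names of fields that leak API-key material."""
    try:
        cfg = json.loads(config_text)
    except json.JSONDecodeError:
        return []
    return sorted(k for k, v in cfg.items()
                  if k in SENSITIVE_FIELDS and isinstance(v, str) and v)

# A deployment is exposed if http://<host>/config.json (path assumed)
# returns a body like this without credentials:
body = '{"openai_api_key": "sk-...", "users": []}'
print(exposed_secrets(body))  # a non-empty list means keys are exposed
```

Run this against the body of a fetch from an untrusted network position; any non-empty result means the keys in that file should be treated as compromised and rotated.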

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N
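The 5.3 base score follows directly from this vector. A worked sketch of the CVSS v3.1 base-score arithmetic, using the metric weights from the specification:

```python
import math

# Weights for AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N per the CVSS v3.1 spec.
av, ac, pr, ui = 0.85, 0.77, 0.85, 0.85   # Network / Low / None / None
c, i, a = 0.22, 0.0, 0.0                   # only Confidentiality is Low

iss = 1 - (1 - c) * (1 - i) * (1 - a)      # Impact Sub-Score = 0.22
impact = 6.42 * iss                         # scope unchanged
exploitability = 8.22 * av * ac * pr * ui   # maximal: all four worst-case

def roundup(x):
    """CVSS 'round up to one decimal place'."""
    return math.ceil(round(x, 6) * 10) / 10

base = roundup(min(impact + exploitability, 10))
print(base)  # 5.3
```

Note that the exploitability term is at its maximum; only the limited impact (C:L, no integrity or availability loss) keeps the score at medium, which is exactly why the advisory argues practical risk exceeds the rating.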

Timeline

Published
June 2, 2023
Last Modified
November 21, 2024
First Seen
June 2, 2023

Related Vulnerabilities