CVE-2024-11037: gpt_academic: path traversal exposes LLM API keys

CVSS: Unknown · PoC Available · CISA SSVC: Track*
Published March 20, 2025
CISO Take

Any internal deployment of gpt_academic on Windows is at risk of full OpenAI API key exposure via a single unauthenticated HTTP request. Rotate all API keys on affected instances immediately and update to a commit beyond 679352d. If you cannot patch now, block external access to gpt_academic at the network perimeter.

Risk Assessment

Effective severity is HIGH despite the missing CVSS score. The vulnerability is trivially exploitable (no authentication required, no special tooling), the payload is a single crafted URL, and the impact is direct credential theft. Windows-specific path normalization quirks allow bypassing the blocked_paths blocklist. API key theft from config.py enables immediate financial harm (unauthorized LLM usage billed to the victim) and potential data exfiltration through the stolen API access.

Affected Systems

Package        Ecosystem   Vulnerable Range             Patched
gpt_academic   pip         commit 679352d and earlier   No patch released

Do you use gpt_academic? You're affected.

Severity & Risk

CVSS 3.1: N/A
EPSS: 0.2% chance of exploitation in 30 days (higher than 36% of all CVEs)
Exploitation Status: Exploit Available
Exploitation Likelihood: MEDIUM
Sophistication: Trivial
Exploitation Confidence: Medium
CISA SSVC: Public PoC indexed (trickest/cve)

Composite signal derived from CISA KEV, CISA SSVC, EPSS, trickest/cve, and Nuclei templates.

Recommended Action

6 steps
1. IMMEDIATE: Rotate all API keys stored in config.py (OpenAI and any other services).
2. UPDATE: Pin the deployment to a commit after 679352d or apply the patch from the huntr advisory.
3. NETWORK: Restrict gpt_academic to internal networks only; block public internet exposure.
4. DETECT: Search web server access logs for requests containing absolute Windows paths (e.g., C:/) or '%5C' patterns targeting /config.py.
5. HARDEN: Move API keys out of config.py into environment variables or a secrets manager.
6. AUDIT: Check OpenAI API usage logs for anomalous spikes in requests, or calls from unfamiliar IP ranges, indicating prior exploitation.
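The DETECT step above can be sketched as a small log filter. The regex and sample log lines below are illustrative, not taken from gpt_academic itself; the filter simply flags any access-log line that pairs a Windows drive prefix or an encoded backslash ('%5C') with a reference to config.py:

```python
import re

# Flag log lines that combine a Windows drive prefix (e.g. C:/ or C:\)
# or a URL-encoded backslash (%5C) with a reference to config.py.
SUSPICIOUS = re.compile(r"(?i)([a-z]:[/\\]|%5c).*config\.py")

def suspicious_lines(log_lines):
    """Return the access-log lines matching the traversal pattern."""
    return [line for line in log_lines if SUSPICIOUS.search(line)]

if __name__ == "__main__":
    sample = [
        "GET /file=C:/apps/gpt_academic/config.py HTTP/1.1",
        "GET /file=..%5C..%5Cconfig.py HTTP/1.1",
        "GET /index.html HTTP/1.1",
    ]
    for hit in suspicious_lines(sample):
        print(hit)
```

The pattern is deliberately broad; expect to triage a few false positives rather than miss an encoded variant.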
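For the HARDEN step, one minimal sketch is to fail fast when the key is absent from the environment instead of reading it from a tracked config.py. The variable name and error behavior here are assumptions for illustration, not gpt_academic's actual configuration interface:

```python
import os

def load_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Read the API key from the environment; refuse to start without it."""
    key = os.environ.get(var)
    if not key:
        # Failing fast beats silently falling back to a key baked into a file.
        raise RuntimeError(f"{var} is not set; set it or use a secrets manager")
    return key
```

A path-traversal read of config.py then yields no credentials, because the file no longer contains any.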

CISA SSVC Assessment

Decision: Track*
Exploitation: poc
Automatable: No
Technical Impact: partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

ISO 42001
A.6.2 - AI System Access Control
NIST AI RMF
GOVERN 6.2 - Policies and procedures are in place to address AI risks MANAGE 2.2 - Mechanisms are in place to sustain treatment of AI risks
OWASP LLM Top 10
LLM02 - Sensitive Information Disclosure

Frequently Asked Questions

What is CVE-2024-11037?

CVE-2024-11037 is a path traversal vulnerability in binary-husky/gpt_academic (at commit 679352d) that lets an unauthenticated attacker bypass the blocked_paths protection and read config.py, exposing the OpenAI API key. It is exploitable on Windows deployments via a single crafted HTTP request, so affected instances should rotate keys, update past 679352d, and block external access until patched.

Is CVE-2024-11037 actively exploited?

Proof-of-concept exploit code is publicly available for CVE-2024-11037, increasing the risk of exploitation.

How to fix CVE-2024-11037?

1. IMMEDIATE: Rotate all API keys stored in config.py (OpenAI and any other services). 2. UPDATE: Pin the deployment to a commit after 679352d or apply the patch from the huntr advisory. 3. NETWORK: Restrict gpt_academic to internal networks only; block public internet exposure. 4. DETECT: Search web server access logs for requests containing absolute Windows paths (e.g., C:/) or '%5C' patterns targeting /config.py. 5. HARDEN: Move API keys out of config.py into environment variables or a secrets manager. 6. AUDIT: Check OpenAI API usage logs for anomalous spikes in requests, or calls from unfamiliar IP ranges, indicating prior exploitation.

What systems are affected by CVE-2024-11037?

This vulnerability affects Windows-hosted deployments of gpt_academic. More broadly, it is relevant to the following AI/ML architecture patterns: LLM API integrations, academic/research AI deployments, self-hosted AI assistants, and API gateway proxies.

What is the CVSS score for CVE-2024-11037?

No CVSS score has been assigned yet.

Technical Details

NVD Description

A path traversal vulnerability exists in binary-husky/gpt_academic at commit 679352d, which allows an attacker to bypass the blocked_paths protection and read the config.py file containing sensitive information such as the OpenAI API key. This vulnerability is exploitable on Windows operating systems by accessing a specific URL that includes the absolute path of the project.

Exploitation Scenario

Attacker discovers a Windows-hosted gpt_academic instance via Shodan or Google dorking. They craft a request to a route that serves files, using a Windows absolute path (e.g., C:\path\to\gpt_academic\config.py) URL-encoded to bypass the blocked_paths check. The server returns config.py in plaintext. Attacker extracts the OpenAI API key, sets it in their own environment, and begins high-volume LLM inference — either for direct use or resale on underground markets — while the victim receives the invoice. Total exploitation time: under 2 minutes.
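The blocklist-bypass mechanics can be illustrated in isolation. The snippet below is a hypothetical reconstruction of the vulnerability class, not gpt_academic's actual code: a naive string comparison against a blocklist of relative paths misses a Windows absolute path that names the very same file. Python's ntpath module applies Windows path semantics on any OS, so the demonstration runs anywhere:

```python
import ntpath

# Hypothetical blocklist of relative paths, as a naive file server might keep it.
BLOCKED_PATHS = ["config.py", "./config.py"]

def is_blocked_naive(requested: str) -> bool:
    # Naive check: compare the raw request string against the blocklist.
    return requested in BLOCKED_PATHS

def is_blocked_normalized(requested: str, root: str = r"C:\apps\gpt_academic") -> bool:
    # Safer check: resolve the request against the project root first, so an
    # absolute Windows path and a relative path to the same file compare equal.
    resolved = ntpath.normpath(ntpath.join(root, requested))
    return ntpath.basename(resolved).lower() == "config.py"

# The relative spelling is caught, but an absolute Windows path slips through
# the naive check while still naming the same file:
print(is_blocked_naive("config.py"))                             # True
print(is_blocked_naive(r"C:\apps\gpt_academic\config.py"))       # False (bypass)
print(is_blocked_normalized(r"C:\apps\gpt_academic\config.py"))  # True
```

The fix pattern is the same one the advisory implies: normalize every requested path to a canonical absolute form before comparing it against any block or allow rule.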

Weaknesses (CWE)

CWE-22: Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

Timeline

Published
March 20, 2025
Last Modified
July 31, 2025
First Seen
March 20, 2025
