CVE-2024-11037: gpt_academic: path traversal exposes LLM API keys
CVSS: Unknown · Exploit: PoC available · CISA SSVC: Track
Any Windows deployment of gpt_academic is at risk of full OpenAI API key exposure via a single unauthenticated HTTP request. Rotate all API keys on affected instances immediately and update to a commit beyond 679352d. If you cannot patch now, block external access to gpt_academic at the network perimeter.
Risk Assessment
Effective severity is HIGH despite missing CVSS. The vulnerability is trivially exploitable (no authentication required, no special tooling), the payload is a crafted URL, and the impact is direct credential theft. Windows-specific path normalization quirks allow bypassing the blocked_paths allowlist. API key theft from config.py enables immediate financial harm (unauthorized LLM usage billed to the victim) and potential data exfiltration through the stolen API access.
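To make the bypass concrete, here is a minimal sketch of a naive blocklist check and one normalization quirk that defeats it on Windows. The function names, blocklist contents, and logic are illustrative assumptions, not gpt_academic's actual code:

```python
# Illustrative sketch of why a naive blocklist check fails on Windows.
# Names and logic are hypothetical; this is not gpt_academic's actual code.
import ntpath  # Windows path semantics, usable on any platform for the demo

blocked_paths = ["config.py"]  # assumed blocklist entry

def is_blocked_naive(requested: str) -> bool:
    # Compares the raw request string against the blocklist.
    return any(b in requested for b in blocked_paths)

# A plain relative request is caught:
assert is_blocked_naive("./config.py")

# But Windows accepts backslashes (often arriving URL-encoded as %5C),
# absolute drive paths, and case-insensitive filenames. The raw string below
# never contains the literal "config.py", so the naive check lets it through:
assert not is_blocked_naive(r"C:\gpt_academic\CONFIG.PY")

def is_blocked_hardened(requested: str) -> bool:
    # Normalize separators, strip the directory, compare case-insensitively.
    name = ntpath.basename(ntpath.normpath(requested)).lower()
    return name in blocked_paths

assert is_blocked_hardened(r"C:\gpt_academic\CONFIG.PY")
```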
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| gpt_academic | pip | commit 679352d and earlier | No pip release; fixed in commits after 679352d (see huntr advisory) |
Do you run gpt_academic on Windows? Assume you're affected until you've updated past commit 679352d.
Recommended Action
Six steps:
1. IMMEDIATE: Rotate all API keys stored in config.py (OpenAI and any other services).
2. UPDATE: Pin the deployment to a commit after 679352d, or apply the patch from the huntr advisory.
3. NETWORK: Restrict gpt_academic to internal networks only; block public internet exposure.
4. DETECT: Search web server access logs for requests containing absolute Windows paths (e.g., C:/) or URL-encoded backslashes (%5C) targeting /config.py; a log-scanning sketch follows this list.
5. HARDEN: Move API keys out of config.py into environment variables or a secrets manager; a minimal sketch also follows below.
6. AUDIT: Check OpenAI API usage logs for anomalous spikes in requests, or calls from unfamiliar IP ranges, indicating prior exploitation.
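For the DETECT step, a minimal log-scanning sketch. The log path, log format, and indicator set are assumptions to adapt to your server:

```python
# Hedged sketch for the DETECT step: flag log lines containing the indicators
# named above. The log path and patterns are assumptions; tune for your setup.
import re

# %5c = URL-encoded backslash; "[a-z]:/" matches absolute Windows drive paths.
SUSPICIOUS = re.compile(r"%5c|[a-z]:/|config\.py", re.IGNORECASE)

with open("access.log", encoding="utf-8", errors="replace") as log:
    for lineno, line in enumerate(log, 1):
        if SUSPICIOUS.search(line):
            print(f"access.log:{lineno}: {line.rstrip()}")
```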
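And for the HARDEN step, a minimal sketch that sources the key from the environment so a leaked config.py contains no secret. The variable names are illustrative, not gpt_academic's actual configuration interface:

```python
# Hedged sketch for the HARDEN step. Variable names are illustrative.
import os

API_KEY = os.environ.get("OPENAI_API_KEY")
if not API_KEY:
    # Fail fast instead of silently falling back to a file-based default.
    raise RuntimeError("OPENAI_API_KEY is not set; refusing to start")
```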
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Classification: Track (per SSVC, remediate on standard update timelines; no evidence of active exploitation at the time of assessment).
Frequently Asked Questions
What is CVE-2024-11037?
CVE-2024-11037 is a path traversal vulnerability in binary-husky/gpt_academic. On Windows, an unauthenticated attacker can bypass the blocked_paths protection with a single crafted HTTP request and read config.py, exposing the OpenAI API key and any other stored credentials. Rotate all API keys on affected instances immediately and update to a commit beyond 679352d.
Is CVE-2024-11037 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2024-11037, increasing the risk of exploitation.
How to fix CVE-2024-11037?
1. IMMEDIATE: Rotate all API keys stored in config.py (OpenAI and any other services). 2. UPDATE: Pin the deployment to a commit after 679352d, or apply the patch from the huntr advisory. 3. NETWORK: Restrict gpt_academic to internal networks only; block public internet exposure. 4. DETECT: Search web server access logs for requests containing absolute Windows paths (e.g., C:/) or URL-encoded backslashes (%5C) targeting /config.py. 5. HARDEN: Move API keys out of config.py into environment variables or a secrets manager. 6. AUDIT: Check OpenAI API usage logs for anomalous spikes in requests, or calls from unfamiliar IP ranges, indicating prior exploitation.
What systems are affected by CVE-2024-11037?
Windows-hosted deployments of gpt_academic are affected. More broadly, the vulnerability pattern is relevant to these AI/ML architectures: LLM API integrations, academic/research AI deployments, self-hosted AI assistants, and API gateway proxies.
What is the CVSS score for CVE-2024-11037?
No CVSS score has been assigned yet. The effective severity is assessed as HIGH (see Risk Assessment above).
Technical Details
NVD Description
A path traversal vulnerability exists in binary-husky/gpt_academic at commit 679352d, which allows an attacker to bypass the blocked_paths protection and read the config.py file containing sensitive information such as the OpenAI API key. This vulnerability is exploitable on Windows operating systems by accessing a specific URL that includes the absolute path of the project.
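For maintainers, a minimal sketch of an anchored path check that resists this absolute-path vector. PROJECT_ROOT, the deny set, and the function are illustrative assumptions, not the project's actual fix:

```python
# Illustrative defensive check, not the actual gpt_academic patch: resolve the
# requested path and verify it stays inside the project root before serving.
import os

PROJECT_ROOT = os.path.realpath("/srv/gpt_academic")  # assumed install dir
DENY_BASENAMES = {"config.py"}

def safe_to_serve(requested: str) -> bool:
    # realpath resolves "..", symlinks, and mixed separators; an absolute
    # path passed as `requested` overrides the join and resolves outside
    # PROJECT_ROOT, so the containment test rejects it.
    resolved = os.path.realpath(os.path.join(PROJECT_ROOT, requested))
    try:
        inside = os.path.commonpath([PROJECT_ROOT, resolved]) == PROJECT_ROOT
    except ValueError:  # different drives on Windows -> definitely outside
        return False
    return inside and os.path.basename(resolved).lower() not in DENY_BASENAMES
```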
Exploitation Scenario
An attacker discovers a Windows-hosted gpt_academic instance via Shodan or Google dorking. They craft a request to a file-serving route using a URL-encoded Windows absolute path (e.g., C:\path\to\gpt_academic\config.py) to bypass the blocked_paths check, and the server returns config.py in plaintext. The attacker extracts the OpenAI API key, sets it in their own environment, and begins high-volume LLM inference, either for direct use or for resale on underground markets, while the victim receives the invoice. Total exploitation time: under two minutes.
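For defenders who want to verify their own instance, a hedged self-test sketch. The /file= route, host, and port are hypothetical placeholders (the advisory only states that the exploit URL embeds the project's absolute Windows path); run this only against systems you own:

```python
# Hedged self-test sketch: probe YOUR OWN instance for the traversal. The
# "/file=" route, host, and port below are hypothetical placeholders.
from urllib.parse import quote
import requests

HOST = "http://gpt-academic.example.internal:7860"  # assumed host:port
ABS_PATH = r"C:\path\to\gpt_academic\config.py"     # path from the scenario

# quote(..., safe="") percent-encodes ":" and "\" (backslash becomes %5C).
url = f"{HOST}/file={quote(ABS_PATH, safe='')}"

resp = requests.get(url, timeout=10)
# A vulnerable instance returns config.py; a patched one should refuse.
print(resp.status_code, "API_KEY" in resp.text)
```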
Weaknesses (CWE)
CWE-22: Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')
References
- huntr.com/bounties/91243fc1-f287-4f4b-8aa6-dfe3efff23e5 (exploit, third-party advisory)
Related Vulnerabilities (same package: gpt_academic)
- CVE-2024-31224 (9.8): deserialization RCE, no auth required
- CVE-2025-25185 (7.5): symlink traversal exposes all server files
- CVE-2024-11031 (7.5): SSRF in Markdown plugin leaks credentials
- CVE-2024-11030 (7.5): SSRF via unsanitized HotReload plugin
- CVE-2024-10950: RCE via unsandboxed prompt injection