If your teams use Mesop to build internal LLM chatbots or AI interfaces, patch to 0.14.1 now — this requires only low privileges over the network. Beyond the straightforward DoS, the AI-specific risk is serious: an attacker can overwrite conversation role assignments (user/assistant/system) at runtime, enabling jailbreak attacks against any LLM your Mesop app fronts. This is a textbook example of a classic web vulnerability (prototype/class pollution) creating a novel AI attack surface.
Risk Assessment
High risk for teams using Mesop to expose LLM interfaces internally or externally. CVSS 8.1 with network vector, low complexity, low privileges, no user interaction — exploitation is straightforward for any authenticated user. EPSS at 3.1% indicates PoC-level feasibility without confirmed in-the-wild exploitation yet. The DoS path is trivial; the jailbreak path requires understanding of LLM role structures but is accessible to moderately skilled adversaries. Not in CISA KEV but the AI-specific impact warrants elevated urgency for AI/ML teams.
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| mesop | pip | < 0.14.1 | 0.14.1 |
Do you use mesop? You're affected.
Severity & Risk
Attack Surface
Recommended Action
1 step-
1) Upgrade Mesop to 0.14.1 immediately — patch is available and the fix is straightforward. 2) If patching is blocked, restrict Mesop application access to trusted internal networks and enforce strong authentication ahead of the app layer. 3) Audit all internal AI tools built on Mesop — inventory them via pip freeze or dependency scanning. 4) Review LLM interaction logs for anomalous role assignments or unexpected system prompt overrides as indicators of exploitation. 5) Add Mesop to your SCA/dependency scanning pipeline to catch future vulnerabilities automatically.
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Classification
Compliance Impact
This CVE is relevant to:
Frequently Asked Questions
What is CVE-2025-30358?
If your teams use Mesop to build internal LLM chatbots or AI interfaces, patch to 0.14.1 now — this requires only low privileges over the network. Beyond the straightforward DoS, the AI-specific risk is serious: an attacker can overwrite conversation role assignments (user/assistant/system) at runtime, enabling jailbreak attacks against any LLM your Mesop app fronts. This is a textbook example of a classic web vulnerability (prototype/class pollution) creating a novel AI attack surface.
Is CVE-2025-30358 actively exploited?
No confirmed active exploitation of CVE-2025-30358 has been reported, but organizations should still patch proactively.
How to fix CVE-2025-30358?
1) Upgrade Mesop to 0.14.1 immediately — patch is available and the fix is straightforward. 2) If patching is blocked, restrict Mesop application access to trusted internal networks and enforce strong authentication ahead of the app layer. 3) Audit all internal AI tools built on Mesop — inventory them via pip freeze or dependency scanning. 4) Review LLM interaction logs for anomalous role assignments or unexpected system prompt overrides as indicators of exploitation. 5) Add Mesop to your SCA/dependency scanning pipeline to catch future vulnerabilities automatically.
What systems are affected by CVE-2025-30358?
This vulnerability affects the following AI/ML architecture patterns: agent frameworks, model serving, AI chatbot interfaces, RAG pipelines.
What is the CVSS score for CVE-2025-30358?
CVE-2025-30358 has a CVSS v3.1 base score of 8.1 (HIGH). The EPSS exploitation probability is 3.69%.
Technical Details
NVD Description
Mesop is a Python-based UI framework that allows users to build web applications. A class pollution vulnerability in Mesop prior to version 0.14.1 allows attackers to overwrite global variables and class attributes in certain Mesop modules during runtime. This vulnerability could directly lead to a denial of service (DoS) attack against the server. Additionally, it could also result in other severe consequences given the application's implementation, such as identity confusion, where an attacker could impersonate an assistant or system role within conversations. This impersonation could potentially enable jailbreak attacks when interacting with large language models (LLMs). Just like the Javascript's prototype pollution, this vulnerability could leave a way for attackers to manipulate the intended data-flow or control-flow of the application at runtime and lead to severe consequences like remote code execution when gadgets are available. Users should upgrade to version 0.14.1 to obtain a fix for the issue.
Exploitation Scenario
An adversary with a low-privilege account on a Mesop-based internal LLM chat tool crafts a malicious HTTP request that exploits the class pollution vulnerability. By overwriting Mesop's global conversation state, they inject a forged 'system' role message containing instructions to ignore all previous safety guidelines. The LLM, receiving what it interprets as a legitimate system-level directive, complies — effectively jailbroken without any prompt injection against the LLM itself. The attacker then exfiltrates sensitive data the LLM has access to via RAG or tool calls, or escalates to DoS by corrupting critical runtime state and crashing the server.
Weaknesses (CWE)
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:H/A:H References
Timeline
Related Vulnerabilities
CVE-2026-21858 10.0 n8n: Input Validation flaw enables exploitation
Same attack type: Auth Bypass GHSA-vvpj-8cmc-gx39 10.0 picklescan: security flaw enables exploitation
Same attack type: Auth Bypass CVE-2025-2828 10.0 LangChain RequestsToolkit: SSRF exposes cloud metadata
Same attack type: Auth Bypass CVE-2025-53767 10.0 Azure OpenAI: SSRF EoP, no auth required (CVSS 10)
Same attack type: Auth Bypass CVE-2026-26030 10.0 semantic-kernel: Code Injection enables RCE
Same attack type: Auth Bypass
AI Threat Alert