CVE-2025-62372: vLLM denial of service via malformed multimodal embedding inputs

GHSA-pmqf-x6x8-p7qw MEDIUM
Published November 21, 2025
CISO Take

If your organization runs vLLM for multimodal inference, patch to 0.11.1 immediately — any authenticated API user can crash the entire serving engine with a single malformed request, taking down all dependent services. This is a hard availability risk with no workaround other than restricting API access to fully trusted callers. Patch-or-restrict is the only acceptable posture.

Risk Assessment

Medium-severity availability risk with high practical impact for production AI serving environments. CVSS A:H and AC:L mean the DoS is reliable and repeatable; any low-privilege user (including trial accounts or internal dev teams) can trigger it. EPSS 0.00083 indicates no active exploitation yet, but the technique is trivially reproducible once known. For organizations using vLLM as the backbone of multimodal AI services, the blast radius is the entire inference fleet — not just a single request.

Affected Systems

Package  Ecosystem  Vulnerable Range    Patched
vllm     pip        >= 0.5.5, < 0.11.1  0.11.1
78.9K · 126 dependents · 56% patched · ~32 days to patch

Severity & Risk

CVSS 3.1
6.5 / 10
EPSS
0.08%
chance of exploitation in 30 days
Higher than 25% of all CVEs
Exploitation Status
No known exploitation
Sophistication
Trivial

Attack Surface

AV AC PR UI S C I A
AV Network
AC Low
PR Low
UI None
S Unchanged
C None
I None
A High

Recommended Action

5 steps
  1. Patch: upgrade vLLM to >= 0.11.1 (pip install vllm==0.11.1).
  2. If patching is delayed, restrict vLLM API access to known trusted callers via network policy or an API gateway; remove low-privilege or anonymous access.
  3. Add input validation at the API gateway layer to reject embedding payloads with unexpected shape dimensions before they reach vLLM.
  4. Implement process supervision (systemd, Kubernetes liveness probes) to auto-restart the vLLM engine on crash and alert on restart events.
  5. Monitor vLLM process crash logs for unexpected terminations as a detection signal for exploitation attempts.
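The gateway-side validation in the steps above can be sketched in a few lines of Python. This is a minimal, illustrative guard, not vLLM's actual request schema: the function name, payload layout (a nested list of floats), and the EXPECTED_HIDDEN_DIM value are all assumptions that must be adapted to the served model.

```python
# Hypothetical API-gateway guard: reject multimodal embedding payloads
# whose hidden dimension does not match the served model's, before the
# request ever reaches vLLM. EXPECTED_HIDDEN_DIM is an assumption here;
# use the actual hidden size of your deployed model.
EXPECTED_HIDDEN_DIM = 4096


def validate_embedding_payload(embedding: list) -> None:
    """Raise ValueError unless `embedding` is a non-empty 2-D list
    with every row of length EXPECTED_HIDDEN_DIM."""
    if not isinstance(embedding, list) or not embedding:
        raise ValueError("embedding must be a non-empty 2-D list")
    for row in embedding:
        if not isinstance(row, list) or len(row) != EXPECTED_HIDDEN_DIM:
            raise ValueError(
                f"each embedding row must have length {EXPECTED_HIDDEN_DIM}"
            )
```

At the gateway this would translate into returning an HTTP 400 to the caller instead of letting a malformed tensor crash the engine; it is a stop-gap for delayed patching, not a substitute for upgrading to 0.11.1.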

CISA SSVC Assessment

Decision Track
Exploitation none
Automatable No
Technical Impact partial

Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.

Classification

Compliance Impact

This CVE is relevant to:

EU AI Act
Article 15 - Accuracy, robustness and cybersecurity
ISO 42001
A.6.2.6 - AI system input validation and robustness
NIST AI RMF
MANAGE-2.4 - Residual risks are managed
OWASP LLM Top 10
LLM04 - Model Denial of Service

Frequently Asked Questions

What is CVE-2025-62372?

CVE-2025-62372 is a denial-of-service vulnerability in vLLM, an inference and serving engine for large language models, affecting versions 0.5.5 up to but not including 0.11.1. Any authenticated API user can crash a vLLM engine serving multimodal models by submitting a multimodal embedding input with the correct number of dimensions but an incorrect shape (e.g. a wrong hidden dimension), regardless of whether the model is intended to support such inputs. The issue is fixed in version 0.11.1.

Is CVE-2025-62372 actively exploited?

No confirmed active exploitation of CVE-2025-62372 has been reported, but organizations should still patch proactively.

How to fix CVE-2025-62372?

  1. Patch: upgrade vLLM to >= 0.11.1 (pip install vllm==0.11.1).
  2. If patching is delayed, restrict vLLM API access to known trusted callers via network policy or an API gateway; remove low-privilege or anonymous access.
  3. Add input validation at the API gateway layer to reject embedding payloads with unexpected shape dimensions before they reach vLLM.
  4. Implement process supervision (systemd, Kubernetes liveness probes) to auto-restart the vLLM engine on crash and alert on restart events.
  5. Monitor vLLM process crash logs for unexpected terminations as a detection signal for exploitation attempts.

What systems are affected by CVE-2025-62372?

This vulnerability affects the following AI/ML architecture patterns: model serving, multimodal AI pipelines, inference endpoints, RAG pipelines with multimodal inputs, LLM-as-a-service platforms.

What is the CVSS score for CVE-2025-62372?

CVE-2025-62372 has a CVSS v3.1 base score of 6.5 (MEDIUM). The EPSS exploitation probability is 0.08%.

Technical Details

NVD Description

vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, users can crash the vLLM engine serving multimodal models by passing multimodal embedding inputs with correct ndim but incorrect shape (e.g. hidden dimension is wrong), regardless of whether the model is intended to support such inputs (as defined in the Supported Models page). This issue has been patched in version 0.11.1.

Exploitation Scenario

An attacker with any level of API access to a vLLM multimodal endpoint — including a free-tier or internal dev account — crafts a POST request to the inference API submitting a multimodal embedding tensor with the correct number of dimensions (correct ndim) but wrong hidden dimension size. vLLM's improper array index validation (CWE-129) fails to catch the shape mismatch, causing an unhandled exception that crashes the engine process. The attacker can repeat this in a loop to cause sustained denial of service, or use it as a one-shot to disrupt a critical inference pipeline during a sensitive business window.
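The failure mode above — correct ndim, wrong hidden dimension — can be illustrated with a small pure-Python sketch. The HIDDEN_DIM value and both helper functions are illustrative stand-ins, not vLLM's internal code; they only show why a rank-only check accepts the malformed tensor while a full-shape check rejects it.

```python
# Assumption for illustration: the served model expects a hidden size of 4096.
HIDDEN_DIM = 4096


def ndim(x) -> int:
    """Nesting depth of a uniformly nested list (its 'ndim')."""
    depth = 0
    while isinstance(x, list):
        depth += 1
        x = x[0] if x else None
    return depth


def hidden_dim_ok(tensor: list) -> bool:
    """Full-shape check: every row must match the expected hidden size."""
    return all(len(row) == HIDDEN_DIM for row in tensor)


good = [[0.0] * HIDDEN_DIM for _ in range(8)]        # shape (8, 4096)
bad = [[0.0] * (HIDDEN_DIM - 1) for _ in range(8)]   # shape (8, 4095)

# A rank-only check (roughly the pre-0.11.1 behavior) accepts BOTH tensors:
assert ndim(good) == 2 and ndim(bad) == 2

# A full-shape check (what the fix enforces) rejects only the malformed one:
assert hidden_dim_ok(good)
assert not hidden_dim_ok(bad)
```

The design point is simply that validating tensor rank alone is insufficient: every dimension the downstream kernel will index into must be checked before the tensor reaches model code.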

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

Timeline

Published
November 21, 2025
Last Modified
December 4, 2025
First Seen
November 21, 2025

Related Vulnerabilities