If your organization runs vLLM for multimodal inference, patch to 0.11.1 immediately — any authenticated API user can crash the entire serving engine with a single malformed request, taking down all dependent services. This is a hard availability risk with no workaround other than restricting API access to fully trusted callers. Patch-or-restrict is the only acceptable posture.
Risk Assessment
Medium-severity availability risk with high practical impact for production AI serving environments. The CVSS vector's A:H (high availability impact) and AC:L (low attack complexity) mean the DoS is reliable and repeatable; any low-privileged user (including trial accounts or internal dev teams) can trigger it. An EPSS score of 0.00083 suggests exploitation in the wild is unlikely so far, but the technique is trivially reproducible once known. For organizations using vLLM as the backbone of multimodal AI services, the blast radius is the entire inference fleet, not just a single request.
Recommended Action
1) Patch: upgrade vLLM to >= 0.11.1 (pip install vllm==0.11.1).
2) If patching is delayed, restrict vLLM API access to known trusted callers via network policy or an API gateway; remove low-privilege or anonymous access.
3) Add input validation at the API gateway layer to reject embedding payloads with unexpected shape dimensions before they reach vLLM.
4) Implement process supervision (systemd, Kubernetes liveness probes) to auto-restart the vLLM engine on crash and alert on restart events.
5) Monitor vLLM process crash logs for unexpected terminations as a detection signal for exploitation attempts.
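Step 3 (gateway-level input validation) could be sketched as a small pre-validation shim. This is a minimal illustration, not vLLM's actual request schema: the payload layout, the function name, and the 4096 hidden dimension are all assumptions that must be adapted to the deployed model.

```python
# Hypothetical gateway-side shape check for multimodal embedding payloads.
# EXPECTED_HIDDEN_DIM and the payload layout are illustrative assumptions,
# not vLLM's actual API schema.
EXPECTED_HIDDEN_DIM = 4096  # set to the deployed model's hidden size

def validate_embedding_payload(embedding):
    """Reject embeddings whose hidden dimension is wrong, even when the
    number of dimensions (ndim) looks correct."""
    if not isinstance(embedding, list) or not embedding:
        return False, "embedding must be a non-empty list"
    # Treat a nested list as a 2-D tensor: [num_tokens, hidden_dim]
    rows = embedding if isinstance(embedding[0], list) else [embedding]
    for row in rows:
        if len(row) != EXPECTED_HIDDEN_DIM:
            return False, f"hidden dim {len(row)} != expected {EXPECTED_HIDDEN_DIM}"
    return True, "ok"
```

A gateway would call this before forwarding the request, returning an HTTP 400 on failure so malformed shapes never reach the engine process.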
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2025-62372?
If your organization runs vLLM for multimodal inference, patch to 0.11.1 immediately — any authenticated API user can crash the entire serving engine with a single malformed request, taking down all dependent services. This is a hard availability risk with no workaround other than restricting API access to fully trusted callers. Patch-or-restrict is the only acceptable posture.
Is CVE-2025-62372 actively exploited?
No confirmed active exploitation of CVE-2025-62372 has been reported, but organizations should still patch proactively.
How to fix CVE-2025-62372?
1) Patch: upgrade vLLM to >= 0.11.1 (pip install vllm==0.11.1).
2) If patching is delayed, restrict vLLM API access to known trusted callers via network policy or an API gateway; remove low-privilege or anonymous access.
3) Add input validation at the API gateway layer to reject embedding payloads with unexpected shape dimensions before they reach vLLM.
4) Implement process supervision (systemd, Kubernetes liveness probes) to auto-restart the vLLM engine on crash and alert on restart events.
5) Monitor vLLM process crash logs for unexpected terminations as a detection signal for exploitation attempts.
What systems are affected by CVE-2025-62372?
This vulnerability affects the following AI/ML architecture patterns: model serving, multimodal AI pipelines, inference endpoints, RAG pipelines with multimodal inputs, LLM-as-a-service platforms.
What is the CVSS score for CVE-2025-62372?
CVE-2025-62372 has a CVSS v3.1 base score of 6.5 (MEDIUM). The EPSS exploitation probability is 0.09%.
Technical Details
NVD Description
vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before 0.11.1, users can crash the vLLM engine serving multimodal models by passing multimodal embedding inputs with correct ndim but incorrect shape (e.g. hidden dimension is wrong), regardless of whether the model is intended to support such inputs (as defined in the Supported Models page). This issue has been patched in version 0.11.1.
Exploitation Scenario
An attacker with any level of API access to a vLLM multimodal endpoint — including a free-tier or internal dev account — crafts a POST request to the inference API submitting a multimodal embedding tensor with the correct number of dimensions (correct ndim) but wrong hidden dimension size. vLLM's improper array index validation (CWE-129) fails to catch the shape mismatch, causing an unhandled exception that crashes the engine process. The attacker can repeat this in a loop to cause sustained denial of service, or use it as a one-shot to disrupt a critical inference pipeline during a sensitive business window.
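The failure mode described above, a tensor with the correct ndim but the wrong shape slipping past validation, can be illustrated in plain Python. The function names and the 4096 hidden size below are hypothetical, not vLLM internals; the point is only that checking ndim alone (the CWE-129-style gap) accepts malformed input that a full shape check rejects.

```python
# Illustration of the CWE-129-style gap behind this CVE: a check that
# validates only the number of dimensions accepts a tensor whose hidden
# dimension is wrong. Names are hypothetical, not vLLM internals.
HIDDEN_DIM = 4096  # assumed model hidden size

def ndim(t):
    """Nesting depth of a list-of-lists 'tensor'."""
    d = 0
    while isinstance(t, list):
        d += 1
        t = t[0] if t else None
    return d

def naive_validate(t):
    # The buggy pattern: only the number of dimensions is checked.
    return ndim(t) == 2

def strict_validate(t):
    # Also check the actual hidden dimension of every row.
    return naive_validate(t) and all(len(row) == HIDDEN_DIM for row in t)

malformed = [[0.0] * 7]        # correct ndim (2) but hidden dim 7, not 4096
# naive_validate(malformed)  -> True:  the request would reach the engine
# strict_validate(malformed) -> False: rejected before inference
```

The patched vLLM release closes this class of gap internally; the sketch shows why defense-in-depth shape checks at the gateway (step 3 of the recommended actions) remain worthwhile.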
Weaknesses (CWE)
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
References
- github.com/advisories/GHSA-pmqf-x6x8-p7qw
- nvd.nist.gov/vuln/detail/CVE-2025-62372
- github.com/vllm-project/vllm/commit/58fab50d82838d5014f4a14d991fdb9352c9c84b (Patch)
- github.com/vllm-project/vllm/pull/27204 (Issue, Patch, Vendor)
- github.com/vllm-project/vllm/pull/6613 (Issue)
- github.com/vllm-project/vllm/security/advisories/GHSA-pmqf-x6x8-p7qw (Mitigation, Vendor)
Related Vulnerabilities
- CVE-2024-9053 (9.8, same package: vllm): RCE via unsafe pickle deserialization in RPC server
- CVE-2024-11041 (9.8, same package: vllm): RCE via unsafe pickle deserialization in MessageQueue
- CVE-2026-25960 (9.8, same package: vllm): SSRF allows internal network access
- CVE-2025-47277 (9.8, same package: vllm): RCE via exposed TCPStore in distributed inference
- CVE-2025-32444 (9.8, same package: vllm): RCE via pickle deserialization on ZeroMQ