CVE-2025-48944

GHSA-vrq3-r879-7m65 MEDIUM
Published May 30, 2025

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, the vLLM backend used with the /v1/chat/completions endpoint fails to validate the "pattern" and "type" fields when the tools functionality is invoked, so a single malformed request can crash the inference worker until it is restarted. Version 0.9.0 fixes the issue.

Affected Systems

Package    Ecosystem   Vulnerable Range      Patched
vllm       pip         >= 0.8.0, < 0.9.0     0.9.0
vllm       pip         not listed            No patch
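
To check a deployment against the ranges above, a minimal sketch (assuming Python 3.8+ and the third-party packaging library, neither of which is part of the advisory itself):

    from importlib.metadata import PackageNotFoundError, version

    from packaging.version import Version

    VULNERABLE_MIN = Version("0.8.0")  # first affected release per this advisory
    PATCHED = Version("0.9.0")         # first fixed release

    try:
        installed = Version(version("vllm"))
    except PackageNotFoundError:
        print("vllm is not installed in this environment")
    else:
        if VULNERABLE_MIN <= installed < PATCHED:
            print(f"vllm {installed} is in the vulnerable range; upgrade to {PATCHED} or later")
        else:
            print(f"vllm {installed} is outside the range listed in this advisory")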

Severity & Risk

CVSS 3.1
6.5 / 10
EPSS
0.1%
chance of exploitation in 30 days
KEV Status
Not in KEV
Sophistication
N/A

Recommended Action

Patch available

Update vllm to version 0.9.0 or later

Compliance Impact

Compliance analysis pending.

Technical Details

NVD Description

vLLM is an inference and serving engine for large language models (LLMs). In version 0.8.0 up to but excluding 0.9.0, the vLLM backend used with the /v1/chat/completions OpenAPI endpoint fails to validate unexpected or malformed input in the "pattern" and "type" fields when the tools functionality is invoked. These inputs are not validated before being compiled or parsed, causing a crash of the inference worker with a single request. The worker will remain down until it is restarted. Version 0.9.0 fixes the issue.
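
The advisory publishes no proof of concept, so the exact inputs that trigger the crash are not reproduced here. As an illustrative sketch only, the snippet below shows where the "pattern" and "type" fields sit in an OpenAI-style tools request, plus a simple pre-check that a gateway in front of a vLLM 0.8.x deployment might apply as defense in depth; the model name, tool name, and schema are hypothetical placeholders, and upgrading to 0.9.0 remains the actual fix.

    import re

    # An OpenAI-style /v1/chat/completions request body that invokes tool calling.
    # Each tool's "parameters" block is a JSON Schema; the "type" and "pattern"
    # keywords inside it are the fields the advisory says reached the backend's
    # schema/regex handling without prior validation. Names are placeholders.
    payload = {
        "model": "example-model",
        "messages": [{"role": "user", "content": "Look up order AB-123456"}],
        "tool_choice": "auto",
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "lookup_order",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "order_id": {
                                "type": "string",                  # JSON Schema "type" keyword
                                "pattern": "^[A-Z]{2}-[0-9]{6}$",  # JSON Schema "pattern" keyword (a regex)
                            }
                        },
                        "required": ["order_id"],
                    },
                },
            }
        ],
    }


    def schema_fields_look_sane(schema: dict) -> bool:
        """Reject schemas whose "pattern" does not compile or whose "type" is not a string/list."""
        for key, value in schema.items():
            if key == "pattern":
                if not isinstance(value, str):
                    return False
                try:
                    re.compile(value)
                except re.error:
                    return False
            elif key == "type" and not isinstance(value, (str, list)):
                return False
            if isinstance(value, dict) and not schema_fields_look_sane(value):
                return False
        return True


    # A gateway could run a check like this before forwarding requests to the backend.
    for tool in payload["tools"]:
        print(schema_fields_look_sane(tool["function"]["parameters"]))  # True for this payload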

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

Read out, this describes a network-reachable flaw (AV:N) of low attack complexity (AC:L) that requires only low privileges (PR:L) and no user interaction (UI:N), with no confidentiality or integrity impact (C:N, I:N) but high availability impact (A:H), consistent with the single-request denial of service described above.

Timeline

Published
May 30, 2025
Last Modified
July 1, 2025
First Seen
May 30, 2025