vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. When vLLM is configured to use Mooncake, unsafe deserialization exposed directly over ZMQ/TCP on all network interfaces allows attackers to execute remote code on distributed hosts (CVE-2025-29783).
Affected Systems
| Package | Ecosystem | Vulnerable Range | Patched |
|---|---|---|---|
| vllm | pip | >= 0.6.5, < 0.8.0 | 0.8.0 |
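The vulnerable range in the table above can be checked programmatically. A minimal sketch using only the standard library (the `is_vulnerable` helper is hypothetical and handles plain `X.Y.Z` versions only; for pre-release tags use `packaging.version`):

```python
# Sketch: test whether a vllm version falls inside the vulnerable
# range (>= 0.6.5, < 0.8.0). Helper names are illustrative only.
def parse(version: str) -> tuple:
    """Parse a simple dotted version string into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(version: str) -> bool:
    return parse("0.6.5") <= parse(version) < parse("0.8.0")

print(is_vulnerable("0.7.3"))  # → True (inside the vulnerable range)
print(is_vulnerable("0.8.0"))  # → False (patched release)
```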
Severity & Risk
Critical (CVSS 3.1 base score 9.0, derived from the vector below): a network-adjacent attacker with low privileges and no user interaction achieves remote code execution, with scope changed and high impact on confidentiality, integrity, and availability.
Recommended Action
Patch available
Update vllm to version 0.8.0 or later
Compliance Impact
Compliance analysis pending.
Technical Details
NVD Description
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. When vLLM is configured to use Mooncake, unsafe deserialization exposed directly over ZMQ/TCP on all network interfaces will allow attackers to execute remote code on distributed hosts. This is a remote code execution vulnerability impacting any deployments using Mooncake to distribute KV across distributed hosts. This vulnerability is fixed in 0.8.0.
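The class of bug described here, deserializing attacker-controlled bytes, can be illustrated in isolation. The sketch below is a minimal, deliberately benign demonstration using only the standard library; it does not use vLLM, Mooncake, or ZMQ. It shows how `pickle.loads` executes a callable chosen by whoever produced the bytes:

```python
import pickle

class Payload:
    """Illustrative malicious object. On load, pickle invokes the
    callable returned by __reduce__ with the given arguments; the
    attacker controls both. A real exploit would substitute something
    like os.system with a shell command instead of a harmless list()."""
    def __reduce__(self):
        return (list, (["attacker-controlled call executed"],))

# The attacker serializes the payload and sends it over the network...
wire_bytes = pickle.dumps(Payload())

# ...and a receiver that trusts the bytes runs the embedded call.
result = pickle.loads(wire_bytes)
print(result)  # → ['attacker-controlled call executed']
```

This is why deserialization endpoints must never be exposed to untrusted networks: the wire format itself carries executable behavior, so no input validation after `loads` can help.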
Weaknesses (CWE)
CVSS Vector
CVSS:3.1/AV:A/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H
References
- nvd.nist.gov/vuln/detail/CVE-2025-29783
- github.com/vllm-project/vllm/security/advisories/GHSA-x3m8-f7g5-qhm7 (Vendor advisory)
- github.com/advisories/GHSA-x3m8-f7g5-qhm7
- github.com/pypa/advisory-database/tree/main/vulns/vllm/PYSEC-2025-63.yaml
- github.com/vllm-project/vllm/commit/288ca110f68d23909728627d3100e5a8db820aa2 (Patch)
- github.com/vllm-project/vllm/pull/14228 (Issue)