Any unauthenticated attacker can crash your vLLM inference server with a single HTTP request containing a 1x1 pixel image if you're running Idefics3 multimodal models on versions 0.6.4–0.11.x — no credentials, no sophistication required. Patch to vLLM 0.12.0 immediately. If patching is delayed, add API gateway input validation to reject images below a minimum dimension threshold.
Risk Assessment
High operational risk for organizations running vLLM with Idefics3 multimodal models in production. CVSS 7.5 accurately reflects the threat profile: network-accessible, zero authentication, low attack complexity, complete availability loss. The EPSS score (0.021%) is currently low but irrelevant — the exploit requires no specialized knowledge and the barrier to weaponization is a single crafted HTTP request. Blast radius is total service termination rather than degraded performance. Exposure is scoped to Idefics3-specific deployments, but vLLM is widely adopted in enterprise AI serving infrastructure.
Affected Systems
Severity & Risk
Attack Surface
Recommended Action
Five steps:
1. PATCH: Upgrade vLLM to 0.12.0 or later; this is the definitive fix.
2. WORKAROUND (if patching is delayed): Implement API gateway or middleware input validation to reject images below a minimum dimension threshold (e.g., block images smaller than 32x32 pixels).
3. DETECTION: Monitor for abnormal inference server termination events, especially those correlated with multimodal API requests containing small image payloads. Alert on process restart events in vLLM containers and track inference endpoint availability.
4. RESILIENCE: Verify vLLM containers have auto-restart policies and health checks to minimize per-attack downtime windows.
5. AUDIT: Enumerate all endpoints (internal and external) that accept image inputs routed to vLLM Idefics3 models, and prioritize patching for public-facing instances.
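The dimension-threshold workaround above can be sketched as a pre-routing check in gateway middleware. This is a minimal sketch that parses width and height from the PNG header using only the standard library; the function names and the 32-pixel threshold are illustrative, not part of vLLM, and a production gateway should use a full image library (e.g., Pillow) to also cover JPEG, WebP, and other formats.

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"
MIN_DIMENSION = 32  # illustrative threshold; reject anything smaller on either axis


def png_dimensions(data: bytes):
    """Parse (width, height) from a PNG's IHDR chunk; None if not a PNG."""
    if len(data) < 24 or not data.startswith(PNG_SIGNATURE):
        return None
    # Bytes 8-16 hold the IHDR chunk length and type; width/height follow.
    if data[12:16] != b"IHDR":
        return None
    width, height = struct.unpack(">II", data[16:24])
    return width, height


def is_image_acceptable(data: bytes, min_dim: int = MIN_DIMENSION) -> bool:
    """Reject undecodable payloads and images below the minimum dimension."""
    dims = png_dimensions(data)
    if dims is None:
        return False
    return dims[0] >= min_dim and dims[1] >= min_dim
```

A gateway would call `is_image_acceptable` on each decoded image payload before forwarding the request to vLLM, returning an HTTP 400 on failure so that malformed or degenerate images never reach the Idefics3 preprocessing path.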
CISA SSVC Assessment
Source: CISA Vulnrichment (SSVC v2.0). Decision based on the CISA Coordinator decision tree.
Frequently Asked Questions
What is CVE-2026-22773?
CVE-2026-22773 is a denial-of-service vulnerability in vLLM versions 0.6.4 through 0.11.x affecting multimodal models that use the Idefics3 vision implementation. A specially crafted 1x1 pixel image triggers a tensor dimension mismatch and an unhandled runtime error that terminates the entire server process. No authentication is required; the fix is to upgrade to vLLM 0.12.0.
Is CVE-2026-22773 actively exploited?
Proof-of-concept exploit code is publicly available for CVE-2026-22773, increasing the risk of exploitation.
How to fix CVE-2026-22773?
1. PATCH: Upgrade vLLM to 0.12.0 or later; this is the definitive fix.
2. WORKAROUND (if patching is delayed): Implement API gateway or middleware input validation to reject images below a minimum dimension threshold (e.g., block images smaller than 32x32 pixels).
3. DETECTION: Monitor for abnormal inference server termination events, especially those correlated with multimodal API requests containing small image payloads. Alert on process restart events in vLLM containers and track inference endpoint availability.
4. RESILIENCE: Verify vLLM containers have auto-restart policies and health checks to minimize per-attack downtime windows.
5. AUDIT: Enumerate all endpoints (internal and external) that accept image inputs routed to vLLM Idefics3 models, and prioritize patching for public-facing instances.
What systems are affected by CVE-2026-22773?
This vulnerability affects the following AI/ML architecture patterns: model serving, multimodal AI pipelines, LLM inference infrastructure, API endpoints.
What is the CVSS score for CVE-2026-22773?
CVE-2026-22773 has a CVSS v3.1 base score of 7.5 (HIGH). The EPSS exploitation probability is 0.02%.
Technical Details
NVD Description
vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.6.4 to before 0.12.0, users can crash the vLLM engine serving multimodal models that use the Idefics3 vision model implementation by sending a specially crafted 1x1 pixel image. This causes a tensor dimension mismatch that results in an unhandled runtime error, leading to complete server termination. This issue has been patched in version 0.12.0.
Exploitation Scenario
An attacker identifies a public-facing API or internal endpoint serving a multimodal LLM application built on vLLM — discoverable via job postings, GitHub repos, or API fingerprinting. With zero credentials required, the attacker constructs a multipart HTTP POST request containing a 1x1 pixel PNG image and submits it to any inference endpoint processing images via Idefics3. The vLLM server attempts to process the image, encounters a tensor dimension mismatch on the anomalous image shape, throws an unhandled runtime exception, and terminates the entire server process. Total exploit complexity: generate a 1x1 image (trivially done with any image library or even manually), send one HTTP request. The attacker can repeat this loop to maintain a sustained denial-of-service condition against any unpatched deployment.
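The trivial payload described above can also serve defenders as a regression test: after upgrading to 0.12.0, confirm against a deployment you own that the request no longer terminates the server. The sketch below builds a minimal valid 1x1 PNG from the standard library and assembles a request body in vLLM's OpenAI-compatible chat format; the model name is a placeholder, and nothing is sent on the wire.

```python
import base64
import json
import struct
import zlib


def _chunk(ctype: bytes, data: bytes) -> bytes:
    """PNG chunk: length + type + data + CRC32, per the PNG spec."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))


def one_by_one_png() -> bytes:
    """Build a minimal valid 1x1 8-bit RGB PNG entirely from the stdlib."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0)  # 1x1, RGB, no interlace
    scanline = b"\x00\xff\xff\xff"  # filter byte + one white pixel
    return (b"\x89PNG\r\n\x1a\n"
            + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", zlib.compress(scanline))
            + _chunk(b"IEND", b""))


# Hypothetical request body; the model name is illustrative. POST this to
# /v1/chat/completions ONLY against a deployment you own, to verify the patch.
payload = {
    "model": "HuggingFaceM4/Idefics3-8B-Llama3",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {
                "url": "data:image/png;base64,"
                       + base64.b64encode(one_by_one_png()).decode()}},
        ],
    }],
}
body = json.dumps(payload)
```

On a patched server the request should return a normal error or a completion; on an unpatched Idefics3 deployment it reproduces the crash, which is why this check belongs only in a controlled environment.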
Weaknesses (CWE)
CVSS Vector
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
References
Timeline
Related Vulnerabilities
All of the following affect the same package (vllm):
- CVE-2024-9053 (9.8): RCE via unsafe pickle deserialization in RPC server
- CVE-2024-11041 (9.8): RCE via unsafe pickle deserialization in MessageQueue
- CVE-2026-25960 (9.8): SSRF allows internal network access
- CVE-2025-47277 (9.8): RCE via exposed TCPStore in distributed inference
- CVE-2025-32444 (9.8): RCE via pickle deserialization on ZeroMQ