llama-index-core insecurely handles temporary files

… intended model, as it will open a different model each time. Additionally, an attacker can exploit this vulnerability to perform data model poisoning by creating a model with the same name.
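The core issue with insecure temporary-file handling is a predictable path in a world-writable directory: a local attacker can create a file at that path first, so the victim process opens attacker-controlled data. The sketch below is illustrative only (it is not llama-index-core's actual code); it contrasts a predictable fixed path with Python's `tempfile.mkstemp`, which creates the file atomically with `O_EXCL` and a random name.

```python
import os
import tempfile

# INSECURE pattern (illustrative): a fixed, predictable path in the shared
# temp directory. Any local user could pre-create this file, so a loader
# using it may read an attacker's planted model instead of the intended one.
PREDICTABLE_PATH = os.path.join(tempfile.gettempdir(), "model_cache.bin")

def write_model_safely(data: bytes) -> str:
    """Write model bytes to a securely created temporary file.

    mkstemp opens the file with O_CREAT | O_EXCL and a random suffix,
    so an attacker cannot pre-plant a file at the returned path.
    """
    fd, path = tempfile.mkstemp(suffix=".bin")
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
    return path
```

Callers are responsible for deleting the returned file when done; `tempfile.NamedTemporaryFile(delete=True)` is an alternative when the file's lifetime matches a single scope.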
vLLM is an inference and serving engine for large language models.