Time-to-Patch Analysis

How fast do AI/ML packages respond to security vulnerabilities? A benchmark of 37 packages, each with 3+ known CVEs.

Based on NVD publication-to-modification data. Updated continuously.
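The page does not publish its exact formula, so here is a minimal sketch of the likely computation, assuming "avg days" is the mean gap between a CVE record's NVD `published` and `lastModified` timestamps, taken over patched CVEs only. The field names and the `patched` flag are assumptions for illustration, not the site's actual schema.

```python
from datetime import datetime
from statistics import mean

NVD_FMT = "%Y-%m-%dT%H:%M:%S.%f"  # NVD timestamp format, e.g. 2024-01-01T00:00:00.000

def days_to_patch(published: str, modified: str) -> float:
    """Days between NVD publication and last modification of a CVE record."""
    delta = datetime.strptime(modified, NVD_FMT) - datetime.strptime(published, NVD_FMT)
    return delta.total_seconds() / 86400

def summarize(cves):
    """Aggregate per-package stats from a list of CVE dicts.

    Each dict is assumed to carry 'published', 'lastModified', and a
    boolean 'patched' flag (hypothetical schema).
    """
    patched = [c for c in cves if c["patched"]]
    return {
        "cve_count": len(cves),
        "patched": len(patched),
        # Rounding to whole percent explains rows like TensorFlow (7/1904 -> 0%)
        "patch_rate": round(100 * len(patched) / len(cves)) if cves else 0,
        "avg_days": round(mean(days_to_patch(c["published"], c["lastModified"])
                               for c in patched), 1) if patched else None,
    }
```

Packages with no patched CVEs get `avg_days = None`, matching the "-" entries at the bottom of the table.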

Packages analyzed: 37
Industry average: 120.1d
Fastest: Streamlit (0.0d)
Slowest: TensorFlow (1403.7d)
| # | Package | CVEs | Patched | Patch Rate | Avg Days |
|---|---------|------|---------|------------|----------|
| 1 | Streamlit | 13 | 1 | 8% | 0.0d |
| 2 | LoLLMs | 6 | 2 | 33% | 0.9d |
| 3 | Anthropic Python | 3 | 2 | 67% | 1.0d |
| 4 | n8n | 112 | 15 | 13% | 1.3d |
| 5 | XGrammar | 4 | 4 | 100% | 5.3d |
| 6 | Fickling | 14 | 14 | 100% | 5.4d |
| 7 | smolagents | 5 | 2 | 40% | 9.7d |
| 8 | MLX | 4 | 2 | 50% | 10.9d |
| 9 | picklescan | 62 | 59 | 95% | 11.8d |
| 10 | MONAI | 4 | 4 | 100% | 18.4d |
| 11 | BentoML | 16 | 6 | 38% | 19.0d |
| 12 | Open WebUI | 33 | 11 | 33% | 24.0d |
| 13 | Keras | 18 | 9 | 50% | 26.8d |
| 14 | LangChain Core | 5 | 5 | 100% | 27.7d |
| 15 | ONNX | 11 | 7 | 64% | 34.8d |
| 16 | vLLM | 74 | 34 | 46% | 37.4d |
| 17 | LangChain Community | 4 | 4 | 100% | 47.6d |
| 18 | LlamaIndex Core | 7 | 7 | 100% | 49.7d |
| 19 | SageMaker | 4 | 4 | 100% | 52.7d |
| 20 | Langflow | 73 | 11 | 15% | 54.2d |
| 21 | ExecuTorch | 13 | 12 | 92% | 64.1d |
| 22 | MLflow | 68 | 16 | 24% | 68.1d |
| 23 | LlamaIndex | 6 | 6 | 100% | 96.4d |
| 24 | LiteLLM | 10 | 4 | 40% | 104.8d |
| 25 | Transformers | 44 | 17 | 39% | 107.1d |
| 26 | Gradio | 81 | 21 | 26% | 110.4d |
| 27 | PyTorch | 41 | 3 | 7% | 142.1d |
| 28 | LLaMA Factory | 4 | 3 | 75% | 166.5d |
| 29 | Label Studio | 5 | 4 | 80% | 181.4d |
| 30 | Ray | 8 | 6 | 75% | 217.2d |
| 31 | LangChain | 54 | 7 | 13% | 366.1d |
| 32 | PyTorch Lightning | 3 | 2 | 67% | 496.4d |
| 33 | TensorFlow | 1904 | 7 | 0% | 1403.7d |
| 34 | OpenAI Python | 6 | 0 | 0% | - |
| 35 | Ollama | 26 | 0 | 0% | - |
| 36 | scikit-learn | 3 | 0 | 0% | - |
| 37 | LlamaIndex | 6 | 0 | 0% | - |

Monitor your stack's patch velocity

Get real-time alerts when CVEs in your AI stack are patched. Track patch rates and response times for the packages you depend on.

Start Monitoring