Attack HIGH
Yuanbo Li, Tianyang Xu, Cong Hu +3 more
The rapid progress of Multi-Modal Large Language Models (MLLMs) has significantly advanced downstream applications. However, this progress also...
Attack HIGH
Junchen Li, Chao Qi, Rongzheng Wang +5 more
Retrieval-Augmented Generation (RAG) enhances the capabilities of large language models (LLMs) by incorporating external knowledge, but its reliance...
Attack HIGH
Wang Jian, Shen Hong, Ke Wei +1 more
While federated learning protects data privacy, it also makes the model update process vulnerable to long-term stealthy perturbations. Existing...
2 months ago cs.LG cs.AI cs.CR
PDF
Attack HIGH
Yangyang Wei, Yijie Xu, Zhenyuan Li +2 more
Multi-Agent Systems are emerging as the de facto standard for complex task orchestration. However, their reliance on autonomous execution and...
2 months ago cs.CR cs.MA
PDF
Attack HIGH
Neha Nagaraja, Lan Zhang, Zhilong Wang +2 more
Multimodal Large Language Models (MLLMs) integrate vision and text to power applications, but this integration introduces new vulnerabilities. We...
2 months ago cs.CV cs.AI cs.CR
PDF
Attack HIGH
Zhi Xu, Jiaqi Li, Xiaotong Zhang +2 more
Large language models (LLMs) have achieved remarkable success across diverse applications but remain vulnerable to jailbreak attacks, where attackers...
Attack HIGH
Peter Horvath, Ilia Shumailov, Lukasz Chmielewski +2 more
The multi-million-dollar investment required for modern machine learning (ML) has made large ML models a prime target for theft. In response, the...
Attack HIGH
Jiayao Wang, Mohammad Maruf Hasan, Yiping Zhang +5 more
Self-Supervised Learning (SSL) has emerged as a significant paradigm in representation learning thanks to its ability to learn without extensive...
Attack HIGH
Huw Day, Adrianna Jezierska, Jessica Woodgate
Large Language Models have intensified the scale and strategic manipulation of political discourse on social media, leading to conflict escalation....
2 months ago cs.HC cs.AI
PDF
Attack HIGH
Duoxun Tang, Dasen Dai, Jiyao Wang +3 more
Video-LLMs are increasingly deployed in safety-critical applications but are vulnerable to Energy-Latency Attacks (ELAs) that exhaust computational...
2 months ago cs.CV cs.AI
PDF
Attack HIGH
Xinyu Huang, Qiang Yang, Leming Shen +2 more
Embodied Large Language Models (LLMs) enable AI agents to interact with the physical world through natural language instructions and actions....
Attack HIGH
Jiayao Wang, Yiping Zhang, Mohammad Maruf Hasan +5 more
Self-supervised diffusion models learn high-quality visual representations via latent space denoising. However, their representation layer poses a...
2 months ago cs.CR cs.LG
PDF
Attack HIGH
Oluseyi Olukola, Nick Rahimi
Machine learning based network intrusion detection systems are vulnerable to adversarial attacks that degrade classification performance under both...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Hsin Lin, Yan-Lun Chen, Ren-Hung Hwang +1 more
Backdoor attacks pose a critical threat to the security of deep neural networks, yet existing efforts on universal backdoors often rely on visually...
2 months ago cs.CR cs.CV cs.LG
PDF
Attack HIGH
Yilian Liu, Xiaojun Jia, Guoshun Nan +6 more
Multimodal Large Language Models (MLLMs) have achieved remarkable performance but remain vulnerable to jailbreak attacks that can induce harmful...
2 months ago cs.CV cs.AI cs.CR
PDF
Attack HIGH
Swapnil Parekh
Image captioning models are encoder-decoder architectures trained on large-scale image-text datasets, making them susceptible to adversarial attacks....
2 months ago cs.CV cs.AI
PDF
Attack HIGH
Linxi Jiang, Zhijie Liu, Haotian Luo +1 more
Browser-use agents are widely used for everyday tasks. They enable automated interaction with web pages through structured DOM-based interfaces or...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Kennedy Edemacu, Mohammad Mahdi Shokri
Retrieval-augmented generation (RAG) has emerged as a powerful paradigm for enhancing multimodal large language models by grounding their responses...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Xun Huang, Simeng Qin, Xiaoshuang Jia +6 more
As Large Language Models (LLMs) see wider deployment, their security risks have drawn growing attention. Existing research reveals that LLMs are...
2 months ago cs.AI cs.CR
PDF