Attack HIGH
Rui Wang, Zeming Wei, Xiyue Zhang +1 more
Deep Neural Networks (DNNs) are known to be vulnerable to various adversarial perturbations. To address the safety concerns arising from these...
5 months ago cs.LG cs.AI cs.CR
PDF
Attack HIGH
Gil Goren, Shahar Katz, Lior Wolf
Large Language Models (LLMs) are vulnerable to adversarial attacks that bypass safety guidelines and generate harmful content. Mitigating these...
Attack HIGH
Hao Li, Jiajun He, Guangshuo Wang +3 more
Retrieval-Augmented Generation (RAG) enhances large language models by integrating external knowledge, but reliance on proprietary or sensitive...
Attack HIGH
Lama Sleem, Jerome Francois, Lujun Li +3 more
Jailbreak attacks designed to bypass safety mechanisms pose a serious threat by prompting LLMs to generate harmful or inappropriate content, despite...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Runpeng Geng, Yanting Wang, Chenlong Yin +3 more
Long context LLMs are vulnerable to prompt injection, where an attacker can inject an instruction in a long context to induce an LLM to generate an...
6 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Srikant Panda, Avinash Rai
Large Language Models (LLMs) are commonly evaluated for robustness against paraphrased or semantically equivalent jailbreak prompts, yet little...
6 months ago cs.CL cs.AI
PDF
Attack HIGH
Shuaitong Liu, Renjue Li, Lijia Yu +3 more
Recent advances in Chain-of-Thought (CoT) prompting have substantially improved the reasoning capabilities of large language models (LLMs), but have...
6 months ago cs.CR cs.AI
PDF
Attack HIGH
Yudong Yang, Xuezhen Zhang, Zhifeng Han +6 more
Recent progress in LLMs has enabled understanding of audio signals, but has also exposed new safety risks arising from complex audio inputs that are...
6 months ago cs.SD cs.AI
PDF
Attack HIGH
Zihan Wang, Guansong Pang, Wenjun Miao +2 more
Recent advances in Large Visual Language Models (LVLMs) have demonstrated impressive performance across various vision-language tasks by leveraging...
Attack HIGH
Shigeki Kusaka, Keita Saito, Mikoto Kudo +3 more
Large language models (LLMs) are increasingly deployed in real-world systems, making it critical to understand their vulnerabilities. While data...
6 months ago cs.LG cs.AI
PDF
Attack HIGH
Hongyi Li, Chengxuan Zhou, Chu Wang +5 more
Large Audio-language Models (LAMs) have recently enabled powerful speech-based interactions by coupling audio encoders with Large Language Models...
Attack HIGH
Tiago Machado, Maysa Malfiza Garcia de Macedo, Rogerio Abreu de Paula +5 more
This work investigates how different alignment methods for Large Language Models (LLMs) affect the models' responses to prompt attacks. We...
Attack HIGH
Yuxuan Zhou, Yuzhao Peng, Yang Bai +7 more
Large Vision-Language Models (VLMs) are susceptible to jailbreak attacks: researchers have developed a variety of attack strategies that can...
Attack HIGH
Yaxin Xiao, Qingqing Ye, Zi Liang +4 more
Machine learning models constitute valuable intellectual property, yet remain vulnerable to model extraction attacks (MEA), where adversaries...
6 months ago cs.CR cs.CV cs.LG
PDF
Attack HIGH
Xingyu Li, Xiaolei Liu, Cheng Liu +4 more
As large language models (LLMs) scale, their inference consumes substantial computational resources, exposing them to energy-latency attacks, where...
6 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Hui Lu, Yi Yu, Song Xia +5 more
Large-scale Video Foundation Models (VFMs) have significantly advanced various video-related tasks, either through task-specific models or Multi-modal...
6 months ago cs.CV cs.CR
PDF
Attack HIGH
Reem Al-Saidi, Erman Ayday, Ziad Kobti
This study investigates embedding reconstruction attacks in large language models (LLMs) applied to genomic sequences, with a specific focus on how...
Attack HIGH
Alina Fastowski, Bardh Prenkaj, Yuxiao Li +1 more
LLMs are now an integral part of information retrieval. As such, their role as question answering chatbots raises significant concerns due to their...
6 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Yigitcan Kaya, Anton Landerer, Stijn Pletinckx +3 more
Prompt injection attacks pose a critical threat to large language models (LLMs), with prior work focusing on cutting-edge LLM applications like...
6 months ago cs.CR cs.AI
PDF
Attack HIGH
Janet Jenq, Hongda Shen
Multimodal product retrieval systems in e-commerce platforms rely on effectively combining visual and textual signals to improve search relevance and...