Benchmark HIGH
Chunyang Li, Zifeng Kang, Junwei Zhang +4 more
The adoption of Vision-Language Models (VLMs) in embodied AI agents, while effective, raises safety concerns such as jailbreaking. Prior work...
5 months ago cs.CR cs.CY cs.RO
PDF
Attack HIGH
Zhen Sun, Zongmin Zhang, Deqi Liang +8 more
As LLMs become more common, even non-expert users can pose risks, prompting extensive research into jailbreak attacks. However, most existing black-box...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Yijun Yang, Lichao Wang, Jianping Zhang +3 more
The growing misuse of Vision-Language Models (VLMs) has led providers to deploy multiple safeguards, including alignment tuning, system prompts, and...
Attack HIGH
Yige Li, Zhe Li, Wei Zhao +4 more
Backdoor attacks pose a serious threat to the secure deployment of large language models (LLMs), enabling adversaries to implant hidden behaviors...
5 months ago cs.CR cs.AI
PDF
Survey HIGH
Strahinja Janjusevic, Anna Baron Garcia, Sohrob Kazerounian
Generative AI is reshaping offensive cybersecurity by enabling autonomous red team agents that can plan, execute, and adapt during penetration tests....
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Zhihan Ren, Lijun He, Jiaxi Liang +3 more
Split DNNs enable deployment on edge devices by offloading intensive computation to a cloud server, but this paradigm exposes privacy vulnerabilities, as the...
Attack HIGH
Piercosma Bisconti, Matteo Prandi, Federico Pierucci +7 more
We present evidence that adversarial poetry functions as a universal single-turn jailbreak technique for Large Language Models (LLMs). Across 25...
5 months ago cs.CL cs.AI
PDF
Attack HIGH
Badrinath Ramakrishnan, Akshaya Balaji
Retrieval-augmented generation (RAG) systems have become widely used for enhancing large language model capabilities, but they introduce significant...
5 months ago cs.CR cs.AI
PDF
Survey HIGH
Zimo Ji, Xunguang Wang, Zongjie Li +6 more
Large Language Model (LLM)-based agents with function-calling capabilities are increasingly deployed, but remain vulnerable to Indirect Prompt...
5 months ago cs.CR cs.AI
PDF
Benchmark HIGH
Henry Wong, Clement Fung, Weiran Lin +3 more
To autonomously control vehicles, driving agents use outputs from a combination of machine-learning (ML) models, controller logic, and custom...
5 months ago cs.CR cs.CV cs.LG
PDF
Attack HIGH
Xin Yi, Yue Li, Dongsheng Shi +3 more
Large Language Models (LLMs) are increasingly integrated into educational applications. However, they remain vulnerable to jailbreak and fine-tuning...
Attack HIGH
Zhengchunmin Dai, Jiaxiong Tang, Peng Sun +2 more
In decentralized machine learning paradigms such as Split Federated Learning (SFL) and its variant U-shaped SFL, the server's capabilities are...
5 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Eric Xue, Ruiyi Zhang, Pengtao Xie
Modern language models remain vulnerable to backdoor attacks via poisoned data, where training inputs containing a trigger are paired with a target...
5 months ago cs.CR cs.CL cs.LG
PDF
Attack HIGH
Hajun Kim, Hyunsik Na, Daeseon Choi
As the use of large language models (LLMs) continues to expand, ensuring their safety and robustness has become a critical challenge. In particular,...
Attack HIGH
Ajesh Koyatan Chathoth, Stephen Lee
Sensor data-based recognition systems are widely used in various applications, such as gait-based authentication and human activity recognition...
5 months ago cs.CR cs.LG
PDF
Attack HIGH
Yule Liu, Heyi Zhang, Jinyi Zheng +6 more
Membership inference attacks (MIAs) on large language models (LLMs) pose significant privacy risks across various stages of model training. Recent...
5 months ago cs.CR cs.AI cs.CL
PDF
Tool HIGH
Badhan Chandra Das, Md Tasnim Jawad, Md Jueal Mia +2 more
Large Vision Language Models (LVLMs) demonstrate strong capabilities in multimodal reasoning and many real-world applications, such as visual...
Attack HIGH
Pascal Zimmer, Ghassan Karame
In this paper, we present the first detailed analysis of how training hyperparameters -- such as learning rate, weight decay, momentum, and batch...
5 months ago cs.LG cs.CR cs.CV
PDF
Tool HIGH
Siyang Cheng, Gaotian Liu, Rui Mei +7 more
The rapid adoption of large language models (LLMs) has brought both transformative applications and new security risks, including jailbreak attacks...
5 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Mukkesh Ganesh, Kaushik Iyer, Arun Baalaaji Sankar Ananthan
The Key-Value (KV) cache is an important component for efficient inference in autoregressive Large Language Models (LLMs), but its role as a...
5 months ago cs.CR cs.AI
PDF