Attack HIGH
Eric Xue, Ruiyi Zhang, Pengtao Xie
Modern language models remain vulnerable to backdoor attacks via poisoned data, where training inputs containing a trigger are paired with a target...
5 months ago cs.CR cs.CL cs.LG
Attack HIGH
Hajun Kim, Hyunsik Na, Daeseon Choi
As the use of large language models (LLMs) continues to expand, ensuring their safety and robustness has become a critical challenge. In particular,...
Attack HIGH
Ajesh Koyatan Chathoth, Stephen Lee
Sensor data-based recognition systems are widely used in various applications, such as gait-based authentication and human activity recognition...
5 months ago cs.CR cs.LG
Attack HIGH
Yule Liu, Heyi Zhang, Jinyi Zheng +6 more
Membership inference attacks (MIAs) on large language models (LLMs) pose significant privacy risks across various stages of model training. Recent...
5 months ago cs.CR cs.AI cs.CL
Tool HIGH
Badhan Chandra Das, Md Tasnim Jawad, Md Jueal Mia +2 more
Large Vision Language Models (LVLMs) demonstrate strong capabilities in multimodal reasoning and many real-world applications, such as visual...
Attack HIGH
Pascal Zimmer, Ghassan Karame
In this paper, we present the first detailed analysis of how training hyperparameters -- such as learning rate, weight decay, momentum, and batch...
5 months ago cs.LG cs.CR cs.CV
Tool HIGH
Siyang Cheng, Gaotian Liu, Rui Mei +7 more
The rapid adoption of large language models (LLMs) has brought both transformative applications and new security risks, including jailbreak attacks...
5 months ago cs.CR cs.AI cs.CL
Attack HIGH
Mukkesh Ganesh, Kaushik Iyer, Arun Baalaaji Sankar Ananthan
The Key-Value (KV) cache is an important component for efficient inference in autoregressive Large Language Models (LLMs), but its role as a...
5 months ago cs.CR cs.AI
Attack HIGH
Yunhao Chen, Xin Wang, Juncheng Li +5 more
Automated red teaming frameworks for Large Language Models (LLMs) have become increasingly sophisticated, yet they share a fundamental limitation:...
5 months ago cs.CL cs.CR
Attack HIGH
Haotian Jin, Yang Li, Haihui Fan +3 more
Backdoor attacks pose a serious threat to the security of large language models (LLMs), causing them to exhibit anomalous behavior under specific...
5 months ago cs.CR cs.AI
Attack HIGH
Samuel Nathanson, Rebecca Williams, Cynthia Matuszek
Large language models (LLMs) increasingly operate in multi-agent and safety-critical settings, raising open questions about how their vulnerabilities...
5 months ago cs.LG cs.AI cs.CL
Attack HIGH
Jiaji Ma, Puja Trivedi, Danai Koutra
Text-attributed graphs (TAGs), which combine structural and textual node information, are ubiquitous across many domains. Recent work integrates...
5 months ago cs.CR cs.LG
Attack HIGH
Hasini Jayathilaka
Prompt injection attacks are an emerging threat to large language models (LLMs), enabling malicious users to manipulate outputs through carefully...
Attack HIGH
Rui Wang, Zeming Wei, Xiyue Zhang +1 more
Deep Neural Networks (DNNs) are known to be vulnerable to various adversarial perturbations. To address the safety concerns arising from these...
5 months ago cs.LG cs.AI cs.CR
Attack HIGH
Gil Goren, Shahar Katz, Lior Wolf
Large Language Models (LLMs) are vulnerable to adversarial attacks that bypass safety guidelines and generate harmful content. Mitigating these...
Defense HIGH
Jie Chen, Liangmin Wang
Fuzzing is a widely used technique for detecting vulnerabilities in smart contracts, which generates transaction sequences to explore the execution...
5 months ago cs.CR cs.SE
Benchmark HIGH
Jiayu Li, Yunhan Zhao, Xiang Zheng +4 more
Vision-Language-Action (VLA) models enable robots to interpret natural-language instructions and perform diverse tasks, yet their integration of...
5 months ago cs.CR cs.AI cs.CV
Attack HIGH
Hao Li, Jiajun He, Guangshuo Wang +3 more
Retrieval-Augmented Generation (RAG) enhances large language models by integrating external knowledge, but reliance on proprietary or sensitive...
Survey HIGH
Gioliano de Oliveira Braga, Pedro Henrique dos Santos Rocha, Rafael Pimenta de Mattos Paixão +3 more
Wi-Fi Channel State Information (CSI) has been repeatedly proposed as a biometric modality, often with reports of high accuracy and operational...
5 months ago cs.CR cs.LG cs.NI
Attack HIGH
Lama Sleem, Jerome Francois, Lujun Li +3 more
Jailbreak attacks designed to bypass safety mechanisms pose a serious threat by prompting LLMs to generate harmful or inappropriate content, despite...
5 months ago cs.CR cs.AI