Attack HIGH
Hajun Kim, Hyunsik Na, Daeseon Choi
As the use of large language models (LLMs) continues to expand, ensuring their safety and robustness has become a critical challenge. In particular,...
Attack HIGH
Ajesh Koyatan Chathoth, Stephen Lee
Sensor data-based recognition systems are widely used in various applications, such as gait-based authentication and human activity recognition...
5 months ago cs.CR cs.LG
PDF
Attack HIGH
Yule Liu, Heyi Zhang, Jinyi Zheng +6 more
Membership inference attacks (MIAs) on large language models (LLMs) pose significant privacy risks across various stages of model training. Recent...
5 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Pascal Zimmer, Ghassan Karame
In this paper, we present the first detailed analysis of how training hyperparameters -- such as learning rate, weight decay, momentum, and batch...
5 months ago cs.LG cs.CR cs.CV
PDF
Attack MEDIUM
Fuyao Zhang, Jiaming Zhang, Che Wang +6 more
The reliance of mobile GUI agents on Multimodal Large Language Models (MLLMs) introduces a severe privacy vulnerability: screenshots containing...
Attack MEDIUM
Ayush Chaudhary, Sisir Doppalpudi
The deployment of robust malware detection systems in big data environments requires careful consideration of both security effectiveness and...
5 months ago cs.CR cs.LG
PDF
Attack MEDIUM
Thomas Rivasseau
Current Large Language Model alignment research mostly focuses on improving model robustness against adversarial attacks and misbehavior by training...
5 months ago cs.CL cs.CR
PDF
Attack HIGH
Mukkesh Ganesh, Kaushik Iyer, Arun Baalaaji Sankar Ananthan
The Key Value(KV) cache is an important component for efficient inference in autoregressive Large Language Models (LLMs), but its role as a...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Yunhao Chen, Xin Wang, Juncheng Li +5 more
Automated red teaming frameworks for Large Language Models (LLMs) have become increasingly sophisticated, yet they share a fundamental limitation:...
5 months ago cs.CL cs.CR
PDF
Attack HIGH
Haotian Jin, Yang Li, Haihui Fan +3 more
Backdoor attacks pose a serious threat to the security of large language models (LLMs), causing them to exhibit anomalous behavior under specific...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Samuel Nathanson, Rebecca Williams, Cynthia Matuszek
Large language models (LLMs) increasingly operate in multi-agent and safety-critical settings, raising open questions about how their vulnerabilities...
5 months ago cs.LG cs.AI cs.CL
PDF
Attack MEDIUM
Onkar Shelar, Travis Desell
Large Language Models remain vulnerable to adversarial prompts that elicit toxic content even after safety alignment. We present ToxSearch, a...
5 months ago cs.NE cs.AI cs.CL
PDF
Attack HIGH
Jiaji Ma, Puja Trivedi, Danai Koutra
Text-attributed graphs (TAGs), which combine structural and textual node information, are ubiquitous across many domains. Recent work integrates...
5 months ago cs.CR cs.LG
PDF
Attack MEDIUM
Yuting Tan, Yi Huang, Zhuo Li
Backdoor attacks on large language models (LLMs) typically couple a secret trigger to an explicit malicious output. We show that this explicit...
5 months ago cs.LG cs.CR
PDF
Attack HIGH
Hasini Jayathilaka
Prompt injection attacks are an emerging threat to large language models (LLMs), enabling malicious users to manipulate outputs through carefully...
Attack HIGH
Rui Wang, Zeming Wei, Xiyue Zhang +1 more
Deep Neural Networks (DNNs) are known to be vulnerable to various adversarial perturbations. To address the safety concerns arising from these...
5 months ago cs.LG cs.AI cs.CR
PDF
Attack HIGH
Gil Goren, Shahar Katz, Lior Wolf
Large Language Models (LLMs) are vulnerable to adversarial attacks that bypass safety guidelines and generate harmful content. Mitigating these...
Attack MEDIUM
Sajad U P
Phishing and related cyber threats are becoming more varied and technologically advanced. Among these, email-based phishing remains the most dominant...
5 months ago cs.CR cs.AI cs.LG
PDF
Attack MEDIUM
Shaowei Guan, Yu Zhai, Zhengyu Zhang +2 more
Large Language Models (LLMs) are increasingly vulnerable to adversarial attacks that can subtly manipulate their outputs. While various defense...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Hao Li, Jiajun He, Guangshuo Wang +3 more
Retrieval-Augmented Generation (RAG) enhances large language models by integrating external knowledge, but reliance on proprietary or sensitive...
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial