Defense HIGH
Ayush Garg, Sophia Hager, Jacob Montiel +5 more
Security teams face a challenge: the volume of newly disclosed Common Vulnerabilities and Exposures (CVEs) far exceeds the capacity to manually...
1 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Zikai Zhang, Rui Hu, Olivera Kotevska +1 more
Large Language Models (LLMs) are powerful tools for answering user queries, yet they remain highly vulnerable to jailbreak attacks. Existing...
1 months ago cs.CR cs.AI
PDF
Attack HIGH
Tiankai Yang, Jiate Li, Yi Nian +5 more
LLM-based agents increasingly operate across repeated sessions, maintaining task states to ensure continuity. In many deployments, a single agent...
1 months ago cs.CL cs.AI cs.CR
PDF
Attack HIGH
Yanting Wang, Wei Zou, Runpeng Geng +1 more
Large language models (LLMs) and their applications, such as agents, are highly vulnerable to prompt injection attacks. State-of-the-art prompt...
Tool HIGH
Anubhab Sahu, Diptisha Samanta, Reza Soosahabi
System Instructions in Large Language Models (LLMs) are commonly used to enforce safety policies, define agent behavior, and protect sensitive...
1 months ago cs.CR cs.AI
PDF
Tool HIGH
Jingning Xu, Haochen Luo, Chen Liu
Vision-language models (VLMs) are vulnerable to adversarial image perturbations. Existing works based on adversarial training against task-specific...
1 months ago cs.CV cs.MM
PDF
Attack HIGH
Jiaqing Li, Zhibo Zhang, Shide Zhou +3 more
Model merging has emerged as a powerful technique for combining specialized capabilities from multiple fine-tuned LLMs without additional training...
Tool HIGH
Aengus Lynch
Autonomous AI agents are being deployed with filesystem access, email control, and multi-step planning. This thesis contributes to four open problems...
1 months ago cs.LG cs.AI
PDF
Tool HIGH
Chong Xiang, Drew Zagieboylo, Shaona Ghosh +5 more
AI agents, predominantly powered by large language models (LLMs), are vulnerable to indirect prompt injection, in which malicious instructions...
1 months ago cs.CR cs.AI
PDF
Attack HIGH
Meiwen Ding, Song Xia, Chenqi Kong +1 more
Although multimodal large language models (MLLMs) are increasingly deployed in real-world applications, their instruction-following behavior leaves...
1 months ago cs.CV cs.AI
PDF
Attack HIGH
Kavindu Herath, Joshua Zhao, Saurabh Bagchi
Backdoor attacks on federated learning (FL) are most often evaluated with synthetic corner patches or out-of-distribution (OOD) patterns that are...
1 months ago cs.CR cs.AI cs.CV
PDF
Defense HIGH
Miles Farmer, Ekincan Ufuktepe, Anne Watson +4 more
Large Language Models (LLMs) have emerged as a popular choice in vulnerability detection studies given their foundational capabilities, open source...
1 months ago cs.SE cs.AI cs.CR
PDF
Attack HIGH
Yunrui Yu, Xuxiang Feng, Pengda Qin +5 more
Adversarial robustness evaluation faces a critical challenge as new defense paradigms emerge that can exploit limitations in existing assessment...
1 months ago cs.LG cs.CR
PDF
Tool HIGH
KrishnaSaiReddy Patil
LLM-based chatbots in government services face critical security gaps. Multi-turn adversarial attacks achieve over 90% success against current...
1 months ago cs.CR cs.AI
PDF
Attack HIGH
Bilgehan Sel, Xuanli He, Alwin Peng +2 more
Fine-tuning APIs offered by major AI providers create new attack surfaces where adversaries can bypass safety measures through targeted fine-tuning....
1 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Chihan Huang, Huaijin Wang, Shuai Wang
The pervasive deployment of deep learning models across critical domains has concurrently intensified privacy concerns due to their inherent...
1 months ago cs.LG cs.CR
PDF
Attack HIGH
Chengyin Hu, Jiaju Han, Xuemeng Sun +6 more
Vision-language models (VLMs) rely on a shared visual-textual representation space to perform tasks such as zero-shot classification, image...
Defense HIGH
Aymen Lassoued, Nacef Mbarek, Bechir Dardouri +3 more
Vulnerability detection in C programs is a critical challenge in software security. Although large language models (LLMs) achieve strong detection...
Tool HIGH
Tran Duong Minh Dai, Triet Huynh Minh Le, M. Ali Babar +2 more
Although Graph Neural Networks (GNNs) have shown promise for smart contract vulnerability detection, they still face significant limitations....
1 months ago cs.LG cs.CR
PDF
Attack HIGH
Haochuan Kevin Wang
We present a stage-decomposed analysis of prompt injection attacks against five frontier LLM agents. Prior work measures task-level attack success...
1 months ago cs.CR cs.AI cs.LG
PDF
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial