Attack MEDIUM
Atharv Singh Patlan, Peiyao Sheng, S. Ashwin Hebbar +2 more
Language agents are rapidly expanding from single-user assistants to multi-user collaborators in shared workspaces and groups. However, today's...
5 months ago cs.CR cs.AI cs.CL
PDF
Attack MEDIUM
Tom Perel
The recent boom and rapid integration of Large Language Models (LLMs) into a wide range of applications warrants a deeper understanding of their...
5 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Hussein Jawad, Nicolas Brunel
System prompts are critical for guiding the behavior of Large Language Models (LLMs), yet they often contain proprietary logic or sensitive...
5 months ago cs.CR cs.CL
PDF
Attack MEDIUM
Fuyao Zhang, Jiaming Zhang, Che Wang +6 more
The reliance of mobile GUI agents on Multimodal Large Language Models (MLLMs) introduces a severe privacy vulnerability: screenshots containing...
Attack MEDIUM
Ayush Chaudhary, Sisir Doppalpudi
The deployment of robust malware detection systems in big data environments requires careful consideration of both security effectiveness and...
5 months ago cs.CR cs.LG
PDF
Attack MEDIUM
Thomas Rivasseau
Current Large Language Model alignment research mostly focuses on improving model robustness against adversarial attacks and misbehavior by training...
5 months ago cs.CL cs.CR
PDF
Attack MEDIUM
Onkar Shelar, Travis Desell
Large Language Models remain vulnerable to adversarial prompts that elicit toxic content even after safety alignment. We present ToxSearch, a...
5 months ago cs.NE cs.AI cs.CL
PDF
Attack MEDIUM
Yuting Tan, Yi Huang, Zhuo Li
Backdoor attacks on large language models (LLMs) typically couple a secret trigger to an explicit malicious output. We show that this explicit...
5 months ago cs.LG cs.CR
PDF
Attack MEDIUM
Sajad U P
Phishing and related cyber threats are becoming more varied and technologically advanced. Among these, email-based phishing remains the most dominant...
5 months ago cs.CR cs.AI cs.LG
PDF
Attack MEDIUM
Shaowei Guan, Yu Zhai, Zhengyu Zhang +2 more
Large Language Models (LLMs) are increasingly vulnerable to adversarial attacks that can subtly manipulate their outputs. While various defense...
5 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Lucas Fenaux, Christopher Srinivasa, Florian Kerschbaum
Transparency and security are both central to Responsible AI, but they may conflict in adversarial settings. We investigate the strategic effect of...
5 months ago cs.LG cs.CR cs.GT
PDF
Attack MEDIUM
Farhad Abtahi, Fernando Seoane, Iván Pau +1 more
Healthcare AI systems face major vulnerabilities to data poisoning that current defenses and regulations cannot adequately address. We analyzed eight...
5 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Zixun Xiong, Gaoyi Wu, Qingyang Yu +5 more
Given the high cost of large language model (LLM) training from scratch, safeguarding LLM intellectual property (IP) has become increasingly crucial....
6 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Giorgio Piras, Raffaele Mura, Fabio Brau +3 more
Refusal refers to the functional behavior enabling safety-aligned language models to reject harmful or unethical prompts. Following the growing...
6 months ago cs.AI cs.LG
PDF
Attack MEDIUM
Hanlin Cai, Houtianfu Wang, Haofan Dong +3 more
Internet of Agents (IoA) envisions a unified, agent-centric paradigm where heterogeneous large language model (LLM) agents can interconnect and...
6 months ago cs.NI cs.CL
PDF
Attack MEDIUM
Zhisheng Zhang, Derui Wang, Yifan Mi +6 more
Recent advancements in speech synthesis technology have enriched our daily lives, with high-quality and human-like audio widely adopted across...
6 months ago cs.SD cs.AI cs.CR
PDF
Attack MEDIUM
Yuanheng Li, Zhuoyang Chen, Xiaoyun Liu +5 more
As large language models (LLMs) become increasingly capable, concerns over the unauthorized use of copyrighted and licensed content in their training...
Attack MEDIUM
Dilli Prasad Sharma, Liang Xue, Xiaowei Sun +2 more
The rapid proliferation of Internet of Things (IoT) devices has transformed numerous industries by enabling seamless connectivity and data-driven...
6 months ago cs.CR cs.AI cs.CL
PDF
Attack MEDIUM
Viet Nguyen, Vishal M. Patel
Recent advancements in large-scale generative models have enabled the creation of high-quality images and videos, but have also raised significant...
6 months ago cs.CV cs.AI cs.CR
PDF
Attack MEDIUM
Raunak Somani, Aswani Kumar Cherukuri
This paper studies the integration of Large Language Models into cybersecurity tools and protocols. The main issue discussed in this paper is how...