Attack HIGH
Masahiro Kaneko, Timothy Baldwin
Adversarial attacks by malicious users that threaten the safety of large language models (LLMs) can be viewed as attempts to infer a target property...
5 months ago cs.CR cs.CL cs.LG
Attack HIGH
Mansi Phute, Matthew Hull, Haoran Wang +6 more
Deep learning models deployed in safety-critical applications such as autonomous driving use simulations to test their robustness against adversarial...
5 months ago cs.CR cs.AI cs.LG
Attack HIGH
Amirkia Rafiei Oskooei, Mehmet S. Aktas
The proficiency of Large Language Models (LLMs) in processing structured data and adhering to syntactic rules is a capability that drives their...
5 months ago cs.CR cs.AI cs.CL
Attack HIGH
Alireza Heshmati, Saman Soleimani Roudi, Sajjad Amini +2 more
Existing adversarial attacks often neglect perturbation sparsity, limiting their ability to model structural changes and to explain how deep neural...
5 months ago cs.CR cs.LG eess.IV
Attack HIGH
Dimitris Stefanopoulos, Andreas Voskou
This report presents the winning solution for Task 1 of Colliding with Adversaries: A Challenge on Robust Learning in High Energy Physics Discovery...
5 months ago cs.LG cs.CR
Attack HIGH
Owais Makroo, Siva Rajesh Kasa, Sumegh Roychowdhury +4 more
Membership Inference Attacks (MIAs) pose a critical privacy threat by enabling adversaries to determine whether a specific sample was included in a...
5 months ago cs.CR cs.CL cs.LG
Attack HIGH
Shuang Liang, Zhihao Xu, Jialing Tao +2 more
Despite extensive alignment efforts, Large Vision-Language Models (LVLMs) remain vulnerable to jailbreak attacks, posing serious safety risks. To...
5 months ago cs.CV cs.AI
Attack HIGH
Deyue Zhang, Dongdong Yang, Junjie Mu +6 more
Multimodal large language models (MLLMs) exhibit remarkable capabilities but remain susceptible to jailbreak attacks exploiting cross-modal...
5 months ago cs.CR cs.AI
Attack HIGH
Evangelos Lamprou, Julian Dai, Grigoris Ntousakis +2 more
Software supply-chain attacks are an important and ongoing concern in the open source software ecosystem. These attacks maintain the standard...
Attack HIGH
Xiaoyu Xue, Yuni Lai, Chenxi Huang +4 more
The emergence of graph foundation models (GFMs), particularly those incorporating language models (LMs), has revolutionized graph learning and...
5 months ago cs.CR cs.AI
Attack HIGH
Yingguang Yang, Xianghua Zeng, Qi Wu +5 more
Social networks have become a crucial source of real-time information for individuals. The influence of social bots within these platforms has...
5 months ago cs.LG cs.AI cs.CR
Attack HIGH
Abdulrahman Alhaidari, Balaji Palanisamy, Prashant Krishnamurthy
Billions of dollars are lost every year on DeFi platforms to transactions that exploit business-logic or accounting vulnerabilities. Existing defenses...
5 months ago cs.CR cs.AI cs.DC
Attack HIGH
Wei Zou, Yupei Liu, Yanting Wang +3 more
LLM-integrated applications are vulnerable to prompt injection attacks, where an attacker contaminates the input to inject malicious instructions,...
5 months ago cs.CR cs.LG
Attack HIGH
Avihay Cohen
Large Language Model (LLM) based agents integrated into web browsers (often called agentic AI browsers) offer powerful automation of web tasks....
5 months ago cs.CR cs.AI
Attack HIGH
Baogang Song, Dongdong Zhao, Jianwen Xiang +2 more
Backdoor attacks pose a persistent security risk to deep neural networks (DNNs) due to their stealth and durability. While recent research has...
5 months ago cs.CR cs.AI
Attack HIGH
Tuan T. Nguyen, John Le, Thai T. Vu +2 more
Large language models (LLMs) achieve impressive performance across diverse tasks yet remain vulnerable to jailbreak attacks that bypass safety...
Attack HIGH
Yuqi Jia, Yupei Liu, Zedian Shao +2 more
Prompt injection attacks deceive a large language model into completing an attacker-specified task instead of its intended task by contaminating its...
5 months ago cs.CR cs.AI
Attack HIGH
Bowen Fan, Zhilin Guo, Xunkai Li +5 more
Graph Neural Networks (GNNs) have become a pivotal framework for modeling graph-structured data, enabling a wide range of applications from social...
Attack HIGH
Xiaoxue Ren, Penghao Jiang, Kaixin Li +6 more
Web applications are prime targets for cyberattacks as gateways to critical services and sensitive data. Traditional penetration testing is costly...
5 months ago cs.CR cs.CL
Attack HIGH
Harsh Kasyap, Minghong Fang, Zhuqing Liu +2 more
Federated learning (FL) is a privacy-preserving machine learning technique that facilitates collaboration among participants across demographics and geographies. FL...
5 months ago cs.LG cs.CR