Attack HIGH
John Hawkins, Aditya Pramar, Rodney Beard +1 more
Large Language Models (LLMs) suffer from a range of vulnerabilities that allow malicious users to solicit undesirable responses through manipulation...
7 months ago cs.CL cs.AI cs.CY
PDF
Attack HIGH
Isha Gupta, Rylan Schaeffer, Joshua Kazdan +2 more
The field of adversarial robustness has long established that adversarial examples can successfully transfer between image classifiers and that text...
7 months ago cs.LG cs.AI
PDF
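As background for the transfer setting this entry refers to, a minimal sketch follows: craft a one-step FGSM adversarial example against a "source" image classifier and measure whether it also fools a separate "target" classifier. The models, epsilon, [0, 1] pixel range, and data loader are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Fast Gradient Sign Method: one step in the sign of the input gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Assumes inputs live in [0, 1]; clamp keeps the perturbed image valid.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def transfer_rate(source_model, target_model, loader, eps=0.03):
    """Fraction of adversarial examples crafted on the source that also flip the target."""
    fooled, total = 0, 0
    for x, y in loader:
        x_adv = fgsm_example(source_model, x, y, eps)
        pred = target_model(x_adv).argmax(dim=1)
        fooled += (pred != y).sum().item()
        total += y.numel()
    return fooled / total
```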
Tool HIGH
Shoumik Saha, Jifan Chen, Sam Mayers +3 more
Code-capable large language model (LLM) agents are increasingly embedded into software engineering workflows where they can read, write, and execute...
7 months ago cs.CR cs.AI
PDF
Benchmark HIGH
Yinuo Liu, Ruohan Xu, Xilong Wang +2 more
Multiple prompt injection attacks have been proposed against web agents. At the same time, various methods have been developed to detect general...
7 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Xiangfang Li, Yu Wang, Bo Li
With the rapid advancement of large language models (LLMs), ensuring their safe use becomes increasingly critical. Fine-tuning is a widely used...
Attack HIGH
Alexandrine Fortier, Thomas Thebaud, Jesús Villalba +2 more
Large Language Models (LLMs) and their multimodal extensions are becoming increasingly popular. One common approach to enable multimodality is to...
7 months ago cs.CL cs.CR cs.SD
PDF
Defense HIGH
Shojiro Yamabe, Jun Sakuma
Diffusion language models (DLMs) generate tokens in parallel through iterative denoising, which can reduce latency and enable bidirectional...
7 months ago cs.AI cs.LG
PDF
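For readers unfamiliar with the decoding style this entry describes, a rough sketch of a mask-based, iterative-denoising loop for a diffusion language model: start fully masked, predict all positions in parallel each step, and commit the most confident tokens. The model interface, mask token, and unmasking schedule are assumptions, not the paper's defense.

```python
import torch

def iterative_denoise(model, seq_len, mask_id, num_steps=8):
    """Parallel denoising: each step fills in the most confident masked positions."""
    tokens = torch.full((1, seq_len), mask_id, dtype=torch.long)
    for step in range(num_steps):
        logits = model(tokens)                     # assumed shape (1, seq_len, vocab)
        probs, preds = logits.softmax(-1).max(-1)  # per-position confidence and prediction
        still_masked = tokens.eq(mask_id)
        # Unmask a growing fraction of positions, keeping the rest for later steps.
        target = int(seq_len * (step + 1) / num_steps)
        k = target - (~still_masked).sum().item()
        if k > 0:
            conf = probs.masked_fill(~still_masked, -1.0)
            idx = conf.topk(k, dim=-1).indices
            tokens[0, idx[0]] = preds[0, idx[0]]
    return tokens
```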
Attack HIGH
Raik Dankworth, Gesina Schwalbe
Deep neural networks (NNs) for computer vision are vulnerable to adversarial attacks, i.e., minuscule malicious changes to inputs may induce...
7 months ago cs.CR cs.LG
PDF
Attack HIGH
Chenxiang Luo, David K. Y. Yau, Qun Song
Federated learning (FL) enables collaborative model training without sharing raw data but is vulnerable to gradient inversion attacks (GIAs), where...
7 months ago cs.CR cs.LG
PDF
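For context on the attack class named here, gradient inversion typically follows a gradient-matching recipe: optimize a dummy input until its gradients reproduce the gradients a client shared with the server. A hedged sketch under assumed shapes and a cross-entropy objective (not this paper's specific attack):

```python
import torch
import torch.nn.functional as F

def gradient_inversion(model, observed_grads, input_shape, label, steps=300, lr=0.1):
    """Approximate a client's training example from its shared gradients."""
    dummy_x = torch.randn(input_shape, requires_grad=True)
    optimizer = torch.optim.Adam([dummy_x], lr=lr)
    params = tuple(model.parameters())
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(model(dummy_x), label)
        dummy_grads = torch.autograd.grad(loss, params, create_graph=True)
        # Match the dummy input's gradients to the gradients observed from the client.
        match = sum(((dg - og) ** 2).sum() for dg, og in zip(dummy_grads, observed_grads))
        match.backward()
        optimizer.step()
    return dummy_x.detach()
```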
Benchmark HIGH
Haoran Xi, Minghao Shao, Brendan Dolan-Gavitt +2 more
Large language models show promise for vulnerability discovery, yet prevailing methods inspect code in isolation, struggle with long contexts, and...
7 months ago cs.SE cs.CR cs.LG
PDF
Attack HIGH
Qinjian Zhao, Jiaqi Wang, Zhiqiang Gao +3 more
Large Language Models (LLMs) have achieved impressive performance across diverse natural language processing tasks, but their growing power also...
Attack HIGH
Xiaobao Wang, Ruoxiao Sun, Yujun Zhang +4 more
Graph Neural Networks (GNNs) have demonstrated strong performance across tasks such as node classification, link prediction, and graph...
7 months ago cs.LG cs.CR
PDF
Benchmark HIGH
Simin Chen, Yixin He, Suman Jana +1 more
LLM-based agents are increasingly deployed for software maintenance tasks such as automated program repair (APR). APR agents automatically fetch...
Attack HIGH
Yein Park, Jungwoo Park, Jaewoo Kang
Large language models (LLMs), despite being safety-aligned, exhibit brittle refusal behaviors that can be circumvented by simple linguistic changes....
Tool HIGH
Jing-Jing Li, Jianfeng He, Chao Shang +6 more
As LLMs advance into autonomous agents with tool-use capabilities, they introduce security challenges that extend beyond traditional content-based...
7 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Yuepeng Hu, Zhengyuan Jiang, Mengyuan Li +4 more
Large language models (LLMs) are often modified after release through post-processing such as post-training or quantization, which makes it...
7 months ago cs.CR cs.CL
PDF
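As a small illustration of one post-processing step this entry mentions, a sketch of symmetric per-tensor int8 weight quantization (illustrative helper names, not the paper's method):

```python
import torch

def quantize_int8(weight: torch.Tensor):
    """Symmetric int8 quantization: weight is approximated by scale * q."""
    scale = weight.abs().max().clamp_min(1e-8) / 127.0
    q = torch.clamp(torch.round(weight / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale
```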
Attack HIGH
Yupei Liu, Yanting Wang, Yuqi Jia +2 more
Prompt injection attacks pose a pervasive threat to the security of Large Language Models (LLMs). State-of-the-art prevention-based defenses...
7 months ago cs.CR cs.AI
PDF
Attack HIGH
Zhifang Zhang, Qiqi Tao, Jiaqi Lv +3 more
Large vision-language models (LVLMs) have achieved impressive performance across a wide range of vision-language tasks, while they remain vulnerable...
Survey HIGH
Weibo Zhao, Jiahao Liu, Bonan Ruan +2 more
Model Context Protocol (MCP) servers enable AI applications to connect to external systems in a plug-and-play manner, but their rapid proliferation...
7 months ago cs.CR cs.SE
PDF
Benchmark HIGH
Alireza Lotfi, Charalampos Katsis, Elisa Bertino
Software vulnerabilities remain a critical security challenge, providing entry points for attackers into enterprise networks. Despite advances in...