AI Security Research
2,529+ academic papers on AI security, attacks, and defenses
Benchmark HIGH
Haoran Ou, Kangjie Chen, Xingshuo Han +4 more
Large Language Models (LLMs) have been augmented with web search to overcome the limitations of their static knowledge boundary by accessing up-to-date...
7 months ago cs.CR cs.AI
Benchmark HIGH
Rishika Bhagwatkar, Kevin Kasa, Abhay Puri +5 more
AI agents are vulnerable to indirect prompt injection attacks, where malicious instructions embedded in external content or tool outputs cause...
Benchmark HIGH
Chengquan Guo, Chulin Xie, Yu Yang +6 more
Code agents have gained widespread adoption due to their strong code generation capabilities and integration with code interpreters, enabling dynamic...
Benchmark HIGH
Yinuo Liu, Ruohan Xu, Xilong Wang +2 more
Multiple prompt injection attacks have been proposed against web agents. At the same time, various methods have been developed to detect general...
7 months ago cs.CR cs.AI cs.CL
Benchmark HIGH
Haoran Xi, Minghao Shao, Brendan Dolan-Gavitt +2 more
Large language models show promise for vulnerability discovery, yet prevailing methods inspect code in isolation, struggle with long contexts, and...
7 months ago cs.SE cs.CR cs.LG
Benchmark HIGH
Simin Chen, Yixin He, Suman Jana +1 more
LLM-based agents are increasingly deployed for software maintenance tasks such as automated program repair (APR). APR agents automatically fetch...
Benchmark HIGH
Alireza Lotfi, Charalampos Katsis, Elisa Bertino
Software vulnerabilities remain a critical security challenge, providing entry points for attackers into enterprise networks. Despite advances in...
Benchmark HIGH
Jianshuo Dong, Sheng Guo, Hao Wang +6 more
Search agents connect LLMs to the Internet, enabling them to access broader and more up-to-date information. However, this also introduces a new...
7 months ago cs.AI cs.CL cs.CR
Benchmark HIGH
Wenkai Guo, Xuefeng Liu, Haolin Wang +3 more
Fine-tuning large language models (LLMs) with local data is a widely adopted approach for organizations seeking to adapt LLMs to their specific...
7 months ago cs.LG cs.CL cs.CR