AI Security Research
2,529+ academic papers on AI security, attacks, and defenses
Attack HIGH
Fortunatus Aabangbio Wulnye, Justice Owusu Agyemang, Kwame Opuni-Boachie Obour Agyekum +3 more
Ensuring the reliability of machine learning-based intrusion detection systems remains a critical challenge in Internet of Things (IoT) environments,...
3 weeks ago cs.CR cs.AI
PDF
Attack HIGH
Andrii Vakhnovskyi
The United States designates Food and Agriculture as one of sixteen critical infrastructure sectors, yet no mandatory cybersecurity requirements...
4 weeks ago cs.CR eess.SY
PDF
Attack HIGH
Yingying Zhao, Chengyin Hu, Qike Zhang +7 more
Vision-Language Models (VLMs) have shown remarkable performance, yet their security remains insufficiently understood. Existing adversarial studies...
Attack HIGH
Jianhao Chen, Haoyang Chen, Hanjie Zhao +2 more
The rapid evolution of Vision-Language Models (VLMs) has catalyzed unprecedented capabilities in artificial intelligence; however, this continuous...
4 weeks ago cs.AI cs.MM
PDF
Attack HIGH
Junyu Ren, Xingjian Pan, Wensheng Gan +1 more
Prompt injection has emerged as a critical security threat to large language models (LLMs), yet existing studies predominantly focus on...
Attack HIGH
Ravikumar Balakrishnan, Sanket Mendapara, Ankit Garg
We study typographic prompt injection attacks on vision-language models (VLMs), where adversarial text is rendered as images to bypass safety...
Attack HIGH
Yulin Chen, Tri Cao, Haoran Li +7 more
Web agents powered by vision-language models (VLMs) enable autonomous interaction with web environments by perceiving and acting on both visual and...
Attack HIGH
Qingchao Shen, Zibo Xiao, Lili Huang +3 more
Large Language Models (LLMs) are increasingly deployed across diverse domains, yet their vulnerability to jailbreak attacks, where adversarial inputs...
4 weeks ago cs.CR cs.AI cs.SE
PDF
Attack HIGH
Dominik Blain
We present COBALT-TLA, a neuro-symbolic verification loop that pairs an LLM with TLC, the TLA+ model checker, in an automated REPL. The LLM generates...
4 weeks ago cs.CR cs.LO
PDF
Attack HIGH
Gamze Kirman Tokgoz, Onat Gungor, Tajana Rosing +1 more
Time-series forecasting aims to predict future values by modeling temporal dependencies in historical observations. It is a critical component of...
4 weeks ago cs.LG cs.CR
PDF
Tool HIGH
Wei Zhao, Zhe Li, Peixin Zhang +1 more
Tool-augmented Large Language Model (LLM) agents have demonstrated impressive capabilities in automating complex, multi-step real-world tasks, yet...
4 weeks ago cs.CR cs.AI
PDF
Tool HIGH
Yihao Zhang, Kai Wang, Jiangrong Wu +7 more
Large Language Models (LLMs) face prominent security risks from jailbreaking, a practice that manipulates models to bypass built-in security...
4 weeks ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Navid Azimi, Aditya Prakash, Yao Wang +1 more
Deep neural networks remain highly vulnerable to adversarial perturbations, limiting their reliability in security- and safety-critical applications....
4 weeks ago cs.CR cs.AI cs.CV
PDF
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial