AI Security Research
2,529+ academic papers on AI security, attacks, and defenses
Attack HIGH
Qingchao Shen, Zibo Xiao, Lili Huang +3 more
Large Language Models (LLMs) are increasingly deployed across diverse domains, yet their vulnerability to jailbreak attacks, where adversarial inputs...
4 weeks ago cs.CR cs.AI cs.SE
Attack MEDIUM
Hongru Song, Yu-An Liu, Ruqing Zhang +4 more
Retrieval-augmented generation (RAG) enhances large language model (LLM) reasoning by retrieving external documents, but also opens up new attack...
Attack HIGH
Dominik Blain
We present COBALT-TLA, a neuro-symbolic verification loop that pairs an LLM with TLC, the TLA+ model checker, in an automated REPL. The LLM generates...
4 weeks ago cs.CR cs.LO
Attack MEDIUM
Anes Abdennebi, Nadjia Kara, Laaziz Lahlou
The applications of Generative Artificial Intelligence (GenAI) and their intersections with data-driven fields, such as healthcare, finance,...
4 weeks ago cs.CR cs.AI
Attack HIGH
Gamze Kirman Tokgoz, Onat Gungor, Tajana Rosing +1 more
Time-series forecasting aims to predict future values by modeling temporal dependencies in historical observations. It is a critical component of...
4 weeks ago cs.LG cs.CR
Attack LOW
Zhixiang Lu, Jionglong Su
Multimodal Large Language Models (MLLMs) in healthcare suffer from severe confirmation bias, often hallucinating visual details to support initial,...
Attack HIGH
Navid Azimi, Aditya Prakash, Yao Wang +1 more
Deep neural networks remain highly vulnerable to adversarial perturbations, limiting their reliability in security- and safety-critical applications....
4 weeks ago cs.CR cs.AI cs.CV