AI Security Research
2,529+ academic papers on AI security, attacks, and defenses
Joel Rorseth, Parke Godfrey, Lukasz Golab +2 more
This paper demonstrates RUBEN, an interactive tool for discovering minimal rules to explain the outputs of retrieval-augmented large language models...
Michael A. Riegler, Inga Strümke
We present swarm-attack, an open-source adversarial testing framework in which multiple lightweight LLM agents coordinate through shared memory,...
2 days ago cs.CR cs.AI cs.LG
Chengjie Wang, Jingzheng Wu, Xiang Ling +2 more
Large language models (LLMs) are now widely integrated into software development workflows, and the code they generate routinely includes third-party...
5 days ago cs.SE cs.AI