AI Security Research
2,529+ academic papers on AI security, attacks, and defenses
Tool HIGH
Tim Van hamme, Thomas Vissers, Javier Carnerero-Cano +4 more
LLMs are increasingly deployed as autonomous agents with access to tools, databases, and external services, yet practitioners (across different...
Yesterday cs.AI cs.CR
PDF
Tool HIGH
Zhaorun Chen, Xun Liu, Haibo Tong +14 more
AI agents are increasingly deployed across diverse domains to automate complex workflows through long-horizon and high-stakes action executions. Due...
Tool HIGH
Haoyu Zhang, Mohammad Zandsalimy, Shanu Sushmita
Large language models (LLMs) employ safety mechanisms to prevent harmful outputs, yet these defenses primarily rely on semantic pattern matching. We...
1 week ago cs.CR cs.AI cs.CL
PDF
Tool HIGH
Weiyi Kong, Ahmad Mohammad Saber, Amr Youssef +1 more
In modern energy systems, industrial control systems (ICS) and power-system SCADA require intrusion detection that is not only accurate but also...
Tool HIGH
Yuchuan Zhao, Tong Chen, Junliang Yu +3 more
Large language model-powered sequential recommender systems (LLM-SRSs) have recently demonstrated remarkable performance, enabling recommendations...
Tool HIGH
Run Hao, Zhuoran Tan
Model Context Protocol (MCP) is increasingly adopted for tool-integrated LLM agents, but its multi-layer design and third-party server ecosystem...
Tool HIGH
Jiamin Chang, Minhui Xue, Ruoxi Sun +3 more
Recent advances in embodied Vision-Language Agentic Systems (VLAS), powered by large vision-language models (LVLMs), enable AI systems to perceive...
3 weeks ago cs.CV cs.AI
PDF
Tool HIGH
Jiacheng Liang, Yao Ma, Tharindu Kumarage +5 more
Reinforcement Learning from Human Feedback (RLHF) is central to aligning Large Language Models (LLMs), yet it introduces a critical vulnerability: an...
3 weeks ago cs.AI cs.CR cs.LG
PDF
Tool HIGH
Wei Zhao, Zhe Li, Peixin Zhang +1 more
Tool-augmented Large Language Model (LLM) agents have demonstrated impressive capabilities in automating complex, multi-step real-world tasks, yet...
4 weeks ago cs.CR cs.AI
PDF
Tool HIGH
Yihao Zhang, Kai Wang, Jiangrong Wu +7 more
Large Language Models (LLMs) face prominent security risks from jailbreaking, a practice that manipulates models to bypass built-in security...
4 weeks ago cs.CR cs.AI cs.CL
PDF