AI Security Research
2,529+ academic papers on AI security, attacks, and defenses
Tool MEDIUM
Yuan Fang, Yiming Luo, Aimin Zhou +1 more
Ensuring the safety of large language models (LLMs) requires robust red teaming, yet the systematic synthesis of high-quality toxic data remains...
3 weeks ago cs.CL cs.AI
Tool LOW
Shawn Zhong, Junxuan Liao +4 more
AI coding agents operate directly on users' filesystems, where they regularly corrupt data, delete files, and leak secrets. Current approaches force...
Tool LOW
Syed Md Mukit Rashid, Abdullah Al Ishtiaq, Kai Tu +7 more
Logical vulnerabilities in software stem from flaws in program logic rather than memory-safety violations, and they can lead to critical security failures....
4 weeks ago cs.CR cs.AI
Tool MEDIUM
Shangkun Che, Silin Du, Ge Gao
The widespread use of Large Language Models (LLMs) in text generation has raised increasing concerns about intellectual property disputes....
4 weeks ago cs.CR cs.CL
Tool HIGH
Wei Zhao, Zhe Li, Peixin Zhang +1 more
Tool-augmented Large Language Model (LLM) agents have demonstrated impressive capabilities in automating complex, multi-step real-world tasks, yet...
4 weeks ago cs.CR cs.AI
Tool HIGH
Yihao Zhang, Kai Wang, Jiangrong Wu +7 more
Large Language Models (LLMs) face prominent security risks from jailbreaking, a practice that manipulates models to bypass built-in security...
4 weeks ago cs.CR cs.AI cs.CL