AI Security Research

2,529+ academic papers on AI security, attacks, and defenses

Total: 2,529
Attack: 969
Benchmark: 729
Defense: 345
Tool: 272
Survey: 142

Attack · MEDIUM

On the Hardness of Junking LLMs

Marco Rando, Samuel Vaiter

Large language models (LLMs) are known to be vulnerable to jailbreak attacks, which typically rely on carefully designed prompts containing explicit...

6 days ago · cs.LG · PDF
