AI Security Research

2,560+ academic papers on AI security, attacks, and defenses

Total: 2,560 · Attack: 982 · Benchmark: 736 · Defense: 350 · Tool: 275 · Survey: 144

Category: Attack · Severity: MEDIUM

Good-Enough LLM Obfuscation (GELO)

Anatoly Belikov, Ilya Fedotov

Large Language Models (LLMs) are increasingly served on shared accelerators where an adversary with read access to device memory can observe KV...

2 months ago · cs.CR · cs.LG · PDF
