AI Security Research

2,077+ academic papers on AI security, attacks, and defenses

Papers by category: Total 2,077 · Attack 809 · Benchmark 603 · Defense 272 · Tool 226 · Survey 113

Category: Attack · Severity: MEDIUM

Good-Enough LLM Obfuscation (GELO)

Anatoly Belikov, Ilya Fedotov

Large Language Models (LLMs) are increasingly served on shared accelerators where an adversary with read access to device memory can observe KV...

Published 2 weeks ago · cs.CR, cs.LG · PDF
