AI Security Research

2,077+ academic papers on AI security, attacks, and defenses

Total: 2,077 · Attack: 809 · Benchmark: 603 · Defense: 272 · Tool: 226 · Survey: 113

Showing 81–100 of 522 papers

Attack HIGH

Prompt Injection as Role Confusion

Charles Ye, Jasmine Cui, Dylan Hadfield-Menell

Language models remain vulnerable to prompt injection attacks despite extensive safety training. We trace this failure to role confusion: models...

1 month ago cs.CL cs.AI cs.CR PDF
Attack HIGH

TFL: Targeted Bit-Flip Attack on Large Language Model

Jingkai Guo, Chaitali Chakrabarti, Deliang Fan

Large language models (LLMs) are increasingly deployed in safety and security critical applications, raising concerns about their robustness to model...

1 month ago cs.CR cs.CL cs.LG PDF
Attack HIGH

Sequential Membership Inference Attacks

Thomas Michel, Debabrota Basu, Emilie Kaufmann

Modern AI models are not static. They go through multiple updates in their lifecycles. Thus, exploiting the model dynamics to create stronger...

1 month ago cs.LG cs.CR math.ST PDF
Attack HIGH

Boundary Point Jailbreaking of Black-Box LLMs

Xander Davies, Giorgi Giglemiani, Edmund Lau +3 more

Frontier LLMs are safeguarded against attempts to extract harmful information via adversarial prompts known as "jailbreaks". Recently, defenders have...

1 month ago cs.LG PDF