AI Security Research

2,589+ academic papers on AI security, attacks, and defenses

Papers by category:
Total: 2,589
Attack: 998
Benchmark: 740
Defense: 355
Tool: 276
Survey: 147

Jailbreaking LLMs via Calibration
Category: Attack · Severity: HIGH
Authors: Yuxuan Lu, Yongkang Guo, Yuqing Kong
Safety alignment in Large Language Models (LLMs) often creates a systematic discrepancy between a model's aligned output and the underlying...
Published 3 months ago · cs.CL · cs.AI · cs.CR
