AI Security Research

2,529+ academic papers on AI security, attacks, and defenses

Total: 2,529 · Attack: 969 · Benchmark: 729 · Defense: 345 · Tool: 272 · Survey: 142


Attack · HIGH

Fingerprinting LLMs via Prompt Injection
Yuepeng Hu, Zhengyuan Jiang, Mengyuan Li +4 more

Large language models (LLMs) are often modified after release through post-processing such as post-training or quantization, which makes it...

7 months ago · cs.CR · cs.CL · PDF

Attack · LOW

Incentive-Aligned Multi-Source LLM Summaries
Yanchen Jiang, Zhe Feng, Aranyak Mehta

Large language models (LLMs) are increasingly used in modern search and answer systems to synthesize multiple, sometimes conflicting, texts into a...

7 months ago · cs.CL · cs.AI · cs.GT · PDF
