AI Security Research

2,077+ academic papers on AI security, attacks, and defenses

Total: 2,077
Attack: 809
Benchmark: 603
Defense: 272
Tool: 226
Survey: 113

Showing 21–40 of 179 papers

Defense MEDIUM

Fail-Closed Alignment for Large Language Models

Zachary Coalson, Beth Sohler, Aiden Gabriel +1 more

We identify a structural weakness in current large language model (LLM) alignment: modern refusal mechanisms are fail-open. While existing approaches...

1 month ago cs.LG cs.CR PDF
Defense MEDIUM

NeST: Neuron Selective Tuning for LLM Safety

Sasha Behrouzi, Lichao Wu, Mohamadreza Rostami +1 more

Safety alignment is essential for the responsible deployment of large language models (LLMs). Yet, existing approaches often rely on heavyweight...

1 month ago cs.CR cs.LG PDF
Defense MEDIUM

Weight-Space Detection of Backdoors in LoRA Adapters

David Puertolas Merenciano, Ekaterina Vasyagina, Raghav Dixit +4 more

LoRA adapters let users fine-tune large language models (LLMs) efficiently. However, LoRA adapters are shared through open repositories like Hugging...

1 month ago cs.CR cs.AI cs.CL PDF
Defense MEDIUM

GPTZero: Robust Detection of LLM-Generated Texts

George Alexandru Adam, Alexander Cui, Edwin Thomas +7 more

While historical considerations surrounding text authenticity revolved primarily around plagiarism, the advent of large language models (LLMs) has...

1 month ago cs.LG PDF
Defense MEDIUM

Future Mining: Learning for Safety and Security

Md Sazedur Rahman, Mizanur Rahman Jewel, Sanjay Madria

Mining is rapidly evolving into an AI-driven cyber-physical ecosystem where safety and operational reliability depend on robust perception,...

1 month ago cs.CR cs.DC PDF
