AI Security Research
2,077+ academic papers on AI security, attacks, and defenses
Benchmark MEDIUM
Haochen Gong, Chenxiao Li, Rui Chang +1 more
Large language model (LLM)-based computer-use agents represent a convergence of AI and OS capabilities, enabling natural language to control system-...
6 months ago cs.CR cs.AI cs.OS
PDF
Benchmark MEDIUM
Jiayu Ding, Xinpeng Liu, Zhiyi Pan +2 more
Lifting 2D open-vocabulary understanding into 3D Gaussian Splatting (3DGS) scenes is a critical challenge. However, mainstream methods suffer from...
6 months ago cs.CV cs.AI
PDF
Tool MEDIUM
Lukas Twist, Jie M. Zhang, Mark Harman +1 more
Large language models (LLMs) are increasingly used to generate code, yet they continue to hallucinate, often inventing non-existent libraries. Such...
6 months ago cs.SE cs.CL
PDF
Tool HIGH
Petar Radanliev
This study presents a structured approach to evaluating vulnerabilities within quantum cryptographic protocols, focusing on the BB84 quantum key...
6 months ago cs.CR cs.AI cs.NI
PDF
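For readers unfamiliar with BB84, the key-sifting step the entry alludes to can be sketched in a few lines. This is a toy classical simulation (no real qubits, no eavesdropper model); the function name, seeding, and basis encoding are illustrative choices of ours, not details from the study:

```python
import random

def bb84_sift(n, seed=0):
    """Toy BB84 sifting: keep only the bit positions where Alice's
    preparation basis matches Bob's measurement basis."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("+x") for _ in range(n)]  # rectilinear or diagonal
    bob_bases   = [rng.choice("+x") for _ in range(n)]
    # When the bases match, Bob recovers Alice's bit exactly; when they
    # differ, his result is random and the position is discarded.
    return [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)
            if ab == bb]

key = bb84_sift(64)  # on average about half the positions survive sifting
```

Security analyses of BB84, such as the one this paper describes, typically start from this sifted key and then model how an eavesdropper's basis mismatches show up as errors in it.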
Attack MEDIUM
David Benfield, Stefano Coniglio, Phan Tu Vuong +1 more
Adversarial machine learning concerns situations in which learners face attacks from active adversaries. Such scenarios arise in applications such as...
6 months ago cs.LG cs.CR
PDF
Defense MEDIUM
Anton Korznikov, Andrey Galichin, Alexey Dontsov +3 more
Activation steering is a promising technique for controlling LLM behavior by adding semantically meaningful vectors directly into a model's hidden...
6 months ago cs.LG cs.AI
PDF
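The core operation of activation steering, as the abstract describes it, is simply adding a vector to a hidden state. A minimal sketch, where the function name, the `alpha` scale, and the unit-norm convention are our own illustrative assumptions rather than the paper's method:

```python
import numpy as np

def steer(hidden, direction, alpha=4.0):
    """One activation-steering step: add a scaled, unit-norm steering
    vector to a hidden-state vector at some chosen layer."""
    v = direction / np.linalg.norm(direction)
    return hidden + alpha * v

h = np.zeros(8)          # stand-in for one token's hidden state
v = np.ones(8)           # stand-in for a learned steering direction
h_steered = steer(h, v, alpha=2.0)
```

In practice the steering vector is derived from contrastive prompt pairs and injected via a forward hook at a specific layer; the sketch only shows the arithmetic.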
Attack HIGH
Aravindhan G, Yuvaraj Govindarajulu, Parin Shah
Recent studies have demonstrated the vulnerability of Automatic Speech Recognition systems to adversarial examples, which can deceive these systems...
6 months ago cs.SD cs.AI cs.CR
PDF
Attack HIGH
Yue Liu, Yanjie Zhao, Yunbo Lyu +3 more
Agentic AI coding editors driven by large language models have recently become more popular due to their ability to improve developer productivity...
6 months ago cs.CR cs.SE
PDF
Attack HIGH
Taeyoung Yun, Pierre-Luc St-Charles, Jinkyoo Park +2 more
We address the challenge of generating diverse attack prompts for large language models (LLMs) that elicit harmful behaviors (e.g., insults, sexual...
6 months ago cs.LG cs.AI
PDF
Tool MEDIUM
Bochuan Cao, Changjiang Li, Yuanpu Cao +3 more
Large language models (LLMs) have been widely adopted across various applications, leveraging customized system prompts for diverse tasks. Facing...
6 months ago cs.CR cs.AI cs.CL
PDF
Defense MEDIUM
Jaehan Kim, Minkyoo Song, Seungwon Shin +1 more
Recent large language models (LLMs) have increasingly adopted the Mixture-of-Experts (MoE) architecture for efficiency. MoE-based LLMs heavily depend...
6 months ago cs.CR cs.AI
PDF
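The routing dependence this entry refers to can be illustrated with a toy top-k router: each token's gate logits select k experts, whose weights are renormalized. This sketches MoE routing in general, not the specific defense in the paper; names and the k=2 choice are ours:

```python
import numpy as np

def top_k_route(logits, k=2):
    """Toy MoE router: pick the k experts with the largest gate logits
    and renormalize their weights with a (numerically stable) softmax."""
    idx = np.argsort(logits)[-k:][::-1]          # indices of the k largest logits
    w = np.exp(logits[idx] - logits[idx].max())  # softmax over the selected logits
    return idx, w / w.sum()

idx, weights = top_k_route(np.array([0.1, 2.0, -1.0, 2.0]), k=2)
```

Because only the selected experts run, an attacker who can bias these logits controls which parameters process a token, which is why routing is a natural target for both attacks and defenses.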
Attack HIGH
Jingkai Guo, Chaitali Chakrabarti, Deliang Fan
Model integrity of Large language models (LLMs) has become a pressing security concern with their massive online deployment. Prior Bit-Flip Attacks...
6 months ago cs.CR cs.CL cs.LG
PDF
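Why a single flipped bit threatens model integrity is easy to see in the IEEE-754 encoding of a weight. A minimal sketch (the helper below is ours; the paper's actual attack procedure, which selects which bits to flip, is far more involved):

```python
import struct

def flip_bit(value, bit):
    """Flip one bit (0 = least significant) in the float32 encoding
    of a weight and return the resulting float."""
    (packed,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", packed ^ (1 << bit)))
    return flipped

flip_bit(1.0, 0)   # low mantissa bit: value barely changes
flip_bit(1.0, 30)  # high exponent bit: 1.0 becomes +inf
```

A flip in a high exponent bit can turn an ordinary weight into infinity and poison every activation it touches, which is the leverage bit-flip attacks exploit.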
Tool MEDIUM
Daiki Chiba, Hiroki Nakano, Takashi Koide
Phishing attacks are a significant societal threat, disproportionately harming vulnerable populations and eroding trust in essential digital...
Attack MEDIUM
Miao Yu, Zhenhong Zhou, Moayad Aloqaily +5 more
Fine-tuned Large Language Models (LLMs) are vulnerable to backdoor attacks through data poisoning, yet the internal mechanisms governing these...
6 months ago cs.CR cs.AI
PDF
Tool MEDIUM
Prakhar Sharma, Haohuang Wen, Vinod Yegneswaran +3 more
The evolution toward 6G networks is being accelerated by the Open Radio Access Network (O-RAN) paradigm -- an open, interoperable architecture that...
6 months ago cs.CR cs.AI cs.LG
PDF