Benchmark MEDIUM
Xiaohui Hu, Wun Yu Chan, Yuejie Shi +5 more
Smart contract security is paramount, but identifying intricate business logic vulnerabilities remains a persistent challenge because existing...
Benchmark HIGH
Zelong Zheng, Jiayuan Zhou, Xing Hu +2 more
Software vulnerability management has become increasingly critical as modern systems scale in size and complexity. However, existing automated...
Benchmark MEDIUM
Alireza Salemi, Hamed Zamani
Personalization is crucial for aligning Large Language Model (LLM) outputs with individual user preferences and background knowledge....
2 months ago cs.CL cs.AI cs.CR
PDF
Tool HIGH
Qi Li, Xinchao Wang
Enabling large language models (LLMs) to solve complex reasoning tasks is a key step toward artificial general intelligence. Recent work augments...
Tool HIGH
Narek Maloyan, Dmitry Namiot
The Model Context Protocol (MCP) has emerged as a de facto standard for integrating Large Language Models with external tools, yet no formal security...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Narek Maloyan, Dmitry Namiot
The proliferation of agentic AI coding assistants, including Claude Code, GitHub Copilot, Cursor, and emerging skill-based architectures, has...
Benchmark MEDIUM
Marton Szep, Jorge Marin Ruiz, Georgios Kaissis +4 more
Fine-tuning Large Language Models (LLMs) on sensitive datasets carries a substantial risk of unintended memorization and leakage of Personally...
2 months ago cs.LG cs.AI cs.CL
PDF
Attack HIGH
Chen Ling, Kai Hu, Hangcheng Liu +3 more
Large Vision-Language Models (LVLMs) are increasingly deployed in real-world intelligent systems for perception and reasoning in open physical...
2 months ago cs.CV cs.AI
PDF
Attack HIGH
Mohammad Zare, Pirooz Shamsinejadbabaki
Membership inference attacks (MIAs) pose a serious threat to the privacy of machine learning models by allowing adversaries to determine whether a...
2 months ago cs.CR cs.AI cs.LG
PDF
Attack MEDIUM
Jiankai Jin, Xiangzheng Zhang, Zhao Liu +2 more
Machine learning systems can produce personalized outputs that allow an adversary to infer sensitive input attributes at inference time. We introduce...
2 months ago cs.LG cs.AI cs.CR
PDF
Survey LOW
Hugo Silva, Mateus Mendes, Hugo Gonçalo Oliveira
Large language models (LLMs) are evolving fast and are now frequently used as evaluators, in a process typically referred to as LLM-as-a-Judge, which...
2 months ago cs.CL cs.AI
PDF
Attack HIGH
David Condrey
Recent proposals advocate using keystroke timing signals, specifically the coefficient of variation (δ) of inter-keystroke intervals, to...
2 months ago cs.CR cs.AI cs.HC
PDF
Benchmark LOW
Massimiliano Pronesti, Anya Belz, Yufang Hou
Recent work on reinforcement learning with verifiable rewards (RLVR) has shown that large language models (LLMs) can be substantially improved using...
2 months ago cs.CL cs.AI
PDF
Tool MEDIUM
Inderjeet Singh, Eleonore Vissol-Gaudin, Andikan Otung +1 more
Fine-tuning Large Language Models (LLMs) for specialized domains is constrained by a fundamental challenge: the need for diverse,...
2 months ago cs.LG cs.AI cs.CR
PDF
Attack MEDIUM
Andy Zhu, Rongzhe Wei, Yupu Gu +1 more
Machine unlearning (MU) for large language models has become critical for AI safety, yet existing methods fail to generalize to Mixture-of-Experts...
2 months ago cs.LG cs.AI
PDF
Attack HIGH
Xing Su, Hao Wu, Hanzhong Liang +4 more
Blockchain systems are increasingly targeted by on-chain attacks that exploit contract vulnerabilities to extract value rapidly and stealthily,...
2 months ago cs.CR cs.SE
PDF
Benchmark MEDIUM
Dongshen Peng, Yi Wang, Austin Schoeffler +2 more
Large language models (LLMs) show promise in clinical decision support yet risk acquiescing to patient pressure for inappropriate care. We introduce...
2 months ago cs.AI cs.HC
PDF
Defense MEDIUM
Xianya Fang, Xianying Luo, Yadong Wang +8 more
Despite the intrinsic risk-awareness of Large Language Models (LLMs), current defenses often result in shallow safety alignment, rendering models...
2 months ago cs.CR cs.AI
PDF
Defense LOW
Zhining Liu, Tianyi Wang, Xiao Lin +9 more
Despite substantial efforts toward improving the moral alignment of Vision-Language Models (VLMs), it remains unclear whether their ethical judgments...
2 months ago cs.CY cs.AI cs.CL
PDF
Attack HIGH
Jivnesh Sandhan, Fei Cheng, Tushar Sandhan +1 more
Large Language Models (LLMs) are increasingly deployed in domains such as education, mental health and customer support, where stable and consistent...