Attack HIGH
Mohammad Zare, Pirooz Shamsinejadbabaki
Membership inference attacks (MIAs) pose a serious threat to the privacy of machine learning models by allowing adversaries to determine whether a...
3 months ago cs.CR cs.AI cs.LG
PDF
Attack MEDIUM
Jiankai Jin, Xiangzheng Zhang, Zhao Liu +2 more
Machine learning systems can produce personalized outputs that allow an adversary to infer sensitive input attributes at inference time. We introduce...
3 months ago cs.LG cs.AI cs.CR
PDF
Survey LOW
Hugo Silva, Mateus Mendes, Hugo Gonçalo Oliveira
Large language models (LLMs) are evolving fast and are now frequently used as evaluators, in a process typically referred to as LLM-as-a-Judge, which...
3 months ago cs.CL cs.AI
PDF
Attack HIGH
David Condrey
Recent proposals advocate using keystroke timing signals, specifically the coefficient of variation ($\delta$) of inter-keystroke intervals, to...
3 months ago cs.CR cs.AI cs.HC
PDF
Benchmark LOW
Massimiliano Pronesti, Anya Belz, Yufang Hou
Recent work on reinforcement learning with verifiable rewards (RLVR) has shown that large language models (LLMs) can be substantially improved using...
3 months ago cs.CL cs.AI
PDF
Tool MEDIUM
Inderjeet Singh, Eleonore Vissol-Gaudin, Andikan Otung +1 more
Fine-tuning Large Language Models (LLMs) for specialized domains is constrained by a fundamental challenge: the need for diverse,...
3 months ago cs.LG cs.AI cs.CR
PDF
Attack MEDIUM
Andy Zhu, Rongzhe Wei, Yupu Gu +1 more
Machine unlearning (MU) for large language models has become critical for AI safety, yet existing methods fail to generalize to Mixture-of-Experts...
3 months ago cs.LG cs.AI
PDF
Attack HIGH
Xing Su, Hao Wu, Hanzhong Liang +4 more
Blockchain systems are increasingly targeted by on-chain attacks that exploit contract vulnerabilities to extract value rapidly and stealthily,...
3 months ago cs.CR cs.SE
PDF
Benchmark MEDIUM
Dongshen Peng, Yi Wang, Austin Schoeffler +2 more
Large language models (LLMs) show promise in clinical decision support yet risk acquiescing to patient pressure for inappropriate care. We introduce...
3 months ago cs.AI cs.HC
PDF
Defense MEDIUM
Xianya Fang, Xianying Luo, Yadong Wang +8 more
Despite the intrinsic risk-awareness of Large Language Models (LLMs), current defenses often result in shallow safety alignment, rendering models...
3 months ago cs.CR cs.AI
PDF
Defense LOW
Zhining Liu, Tianyi Wang, Xiao Lin +9 more
Despite substantial efforts toward improving the moral alignment of Vision-Language Models (VLMs), it remains unclear whether their ethical judgments...
3 months ago cs.CY cs.AI cs.CL
PDF
Attack HIGH
Jivnesh Sandhan, Fei Cheng, Tushar Sandhan +1 more
Large Language Models (LLMs) are increasingly deployed in domains such as education, mental health and customer support, where stable and consistent...
Tool MEDIUM
Wenbo Guo, Shiwen Song, Jiaxun Guo +5 more
Open-source ecosystems such as NPM and PyPI are increasingly targeted by supply chain attacks, yet existing detection methods either depend on...
3 months ago cs.SE cs.CR
PDF
Benchmark MEDIUM
Khoa Nguyen, Khiem Ton, NhatHai Phan +6 more
Although boosting software development performance, large language model (LLM)-powered code generation introduces intellectual property and data...
3 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Andres Karjus, Kais Allkivi, Silvia Maine +3 more
Large language models (LLMs) enable rapid and consistent automated evaluation of open-ended exam responses, including dimensions of content and...
3 months ago cs.CL cs.AI
PDF
Attack MEDIUM
Song Xia, Meiwen Ding, Chenqi Kong +2 more
Multimodal large language models (MLLMs) exhibit strong capabilities across diverse applications, yet remain vulnerable to adversarial perturbations...
3 months ago cs.LG cs.CV
PDF
Other LOW
Xiaoya Zheng, Geng Sun, Jiahui Li +5 more
The low-altitude economy (LAE) is an emerging economic paradigm which fosters integrated development across multiple fields. As a pivotal component...
Attack HIGH
Fengheng Chu, Jiahao Chen, Yuhong Wang +4 more
While Large Language Models (LLMs) are aligned to mitigate risks, their safety guardrails remain fragile against jailbreak attacks. This reveals...
3 months ago cs.LG cs.CR
PDF
Benchmark MEDIUM
Akriti Vij, Benjamin Chua, Darshini Ramiah +43 more
As frontier AI models are deployed globally, it is essential that their behaviour remains safe and reliable across diverse linguistic and cultural...
Attack HIGH
Mingyu Yu, Lana Liu, Zhehao Zhao +2 more
The rapid advancement of Multimodal Large Language Models (MLLMs) has introduced complex security challenges, particularly at the intersection of...
3 months ago cs.CV cs.AI
PDF