Attack MEDIUM
Sen Nie, Jie Zhang, Zhuo Wang +2 more
Vision-language models (VLMs) such as CLIP have demonstrated remarkable zero-shot generalization, yet remain highly vulnerable to adversarial...
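The adversarial vulnerability mentioned in this entry is typically demonstrated with gradient-based perturbations. A minimal sketch of the classic Fast Gradient Sign Method (FGSM, Goodfellow et al.), a generic way such attacks on vision encoders are crafted; the gradient here is a hypothetical stand-in for what backpropagation through the model would produce:

```python
import numpy as np

def fgsm_perturb(image, grad, eps=0.03):
    """One FGSM step: move each pixel by eps in the sign direction of
    the loss gradient, then clip back to the valid pixel range.
    `grad` is assumed to be the loss gradient w.r.t. the input image;
    in practice it comes from backprop through the target model."""
    adv = image + eps * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)

# Toy 2x2 "image" with a made-up gradient, just to show the mechanics.
img = np.full((2, 2), 0.5)
g = np.array([[1.0, -2.0], [0.5, -0.1]])
adv = fgsm_perturb(img, g, eps=0.1)
print(adv)  # each pixel shifted by +0.1 or -0.1 depending on gradient sign
```

This is a sketch of the general attack family, not the specific method in the paper above.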
Attack HIGH
Harsh Chaudhari, Ethan Rathbun, Hanna Foerster +5 more
Chain-of-Thought (CoT) reasoning has emerged as a powerful technique for enhancing large language models' capabilities by generating intermediate...
1 month ago cs.CR cs.LG
PDF
Attack HIGH
Gabriel Lee Jun Rong, Christos Korgialas, Dion Jia Xu Ho +3 more
Existing automated attack suites operate as static ensembles with fixed sequences, lacking strategic adaptation and semantic awareness. This paper...
Attack HIGH
Alexandra Chouldechova, A. Feder Cooper, Solon Barocas +3 more
We argue that conclusions drawn about relative system safety or attack method efficacy via AI red teaming are often not supported by evidence...
Attack HIGH
Narek Maloyan, Dmitry Namiot
The proliferation of agentic AI coding assistants, including Claude Code, GitHub Copilot, Cursor, and emerging skill-based architectures, has...
Attack HIGH
Chen Ling, Kai Hu, Hangcheng Liu +3 more
Large Vision-Language Models (LVLMs) are increasingly deployed in real-world intelligent systems for perception and reasoning in open physical...
2 months ago cs.CV cs.AI
PDF
Attack HIGH
Mohammad Zare, Pirooz Shamsinejadbabaki
Membership inference attacks (MIAs) pose a serious threat to the privacy of machine learning models by allowing adversaries to determine whether a...
2 months ago cs.CR cs.AI cs.LG
PDF
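The membership inference threat described above is often illustrated with the simple loss-threshold attack (Yeom et al.): models usually fit training points more tightly than unseen ones, so a low loss suggests membership. A minimal sketch, with a hypothetical calibration threshold (in practice it would be set from the model's average training loss):

```python
import math

def loss_threshold_mia(loss, threshold=0.5):
    """Predict 'member' when the model's loss on an example falls
    below the threshold. `threshold` is an assumed calibration value;
    a real attack would estimate it from known training behavior."""
    return loss < threshold

# Cross-entropy losses for a confidently-fit point vs. an unseen one.
member_loss = -math.log(0.95)     # low loss: model saw this example
nonmember_loss = -math.log(0.40)  # higher loss: likely unseen
print(loss_threshold_mia(member_loss))     # → True
print(loss_threshold_mia(nonmember_loss))  # → False
```

This baseline is far weaker than the methods studied in the paper above; it only conveys the core intuition.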
Attack MEDIUM
Jiankai Jin, Xiangzheng Zhang, Zhao Liu +2 more
Machine learning systems can produce personalized outputs that allow an adversary to infer sensitive input attributes at inference time. We introduce...
2 months ago cs.LG cs.AI cs.CR
PDF
Attack HIGH
David Condrey
Recent proposals advocate using keystroke timing signals, specifically the coefficient of variation (δ) of inter-keystroke intervals, to...
2 months ago cs.CR cs.AI cs.HC
PDF
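The timing signal named in this entry is a standard statistic: the coefficient of variation is the standard deviation of the inter-keystroke intervals divided by their mean. A minimal sketch of computing it from raw key-press timestamps (function name and sample data are illustrative, not from the paper):

```python
import statistics

def keystroke_cv(timestamps):
    """Coefficient of variation of inter-keystroke intervals.

    `timestamps` are key-press times in seconds. The CV is the ratio
    of the standard deviation of successive intervals to their mean;
    the premise of such proposals is that human typing shows higher
    interval variability than scripted replays."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.stdev(intervals) / statistics.mean(intervals)

# A perfectly regular (scripted) typist has identical intervals, CV = 0.
print(keystroke_cv([0.0, 1.0, 2.0, 3.0]))  # → 0.0
```

An attacker who can replay timings with added jitter can raise the CV arbitrarily, which is the kind of weakness the entry above examines.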
Attack MEDIUM
Andy Zhu, Rongzhe Wei, Yupu Gu +1 more
Machine unlearning (MU) for large language models has become critical for AI safety, yet existing methods fail to generalize to Mixture-of-Experts...
2 months ago cs.LG cs.AI
PDF
Attack HIGH
Xing Su, Hao Wu, Hanzhong Liang +4 more
Blockchain systems are increasingly targeted by on-chain attacks that exploit contract vulnerabilities to extract value rapidly and stealthily,...
2 months ago cs.CR cs.SE
PDF
Attack HIGH
Jivnesh Sandhan, Fei Cheng, Tushar Sandhan +1 more
Large Language Models (LLMs) are increasingly deployed in domains such as education, mental health and customer support, where stable and consistent...
Attack MEDIUM
Song Xia, Meiwen Ding, Chenqi Kong +2 more
Multimodal large language models (MLLMs) exhibit strong capabilities across diverse applications, yet remain vulnerable to adversarial perturbations...
2 months ago cs.LG cs.CV
PDF
Attack HIGH
Fengheng Chu, Jiahao Chen, Yuhong Wang +4 more
While Large Language Models (LLMs) are aligned to mitigate risks, their safety guardrails remain fragile against jailbreak attacks. This reveals...
2 months ago cs.LG cs.CR
PDF
Attack HIGH
Mingyu Yu, Lana Liu, Zhehao Zhao +2 more
The rapid advancement of Multimodal Large Language Models (MLLMs) has introduced complex security challenges, particularly at the intersection of...
2 months ago cs.CV cs.AI
PDF
Attack HIGH
Md Nabi Newaz Khan, Abdullah Arafat Miah, Yu Bi
Graph neural networks (GNNs) have demonstrated exceptional performance in solving critical problems across diverse domains yet remain susceptible to...
2 months ago cs.LG cs.AI cs.CR
PDF
Attack HIGH
Sahar Tahmasebi, Eric Müller-Budack, Ralph Ewerth
Misinformation and fake news have become a pressing societal challenge, driving the need for reliable automated detection methods. Prior research has...
Attack HIGH
Piyumi Bhagya Sudasinghe, Kushan Sudheera Kalupahana Liyanage, Harsha S. Gardiyawasam Pussewalage
The rapid growth of Internet of Things (IoT) devices has increased the scale and diversity of cyberattacks, exposing limitations in traditional...
Attack HIGH
Zhihao Chen, Zirui Gong, Jianting Ning +2 more
Federated Rank Learning (FRL) is a promising Federated Learning (FL) paradigm designed to be resilient against model poisoning attacks due to its...
2 months ago cs.LG cs.CR cs.DC
PDF
Attack MEDIUM
Víctor Mayoral-Vilches, Stefan Rass, Martin Pinzger +14 more
Cybersecurity superintelligence -- artificial intelligence exceeding the best human capability in both speed and strategic reasoning -- represents...