Attack HIGH
Mingrui Liu, Sixiao Zhang, Cheng Long +1 more
As Large Language Models (LLMs) become integral to computing infrastructure, safety alignment serves as the primary security control preventing the...
Attack HIGH
Yukun Jiang, Mingjie Li, Michael Backes +1 more
Despite their superior performance on a wide range of domains, large language models (LLMs) remain vulnerable to misuse for generating harmful...
Attack HIGH
Nguyen Linh Bao Nguyen, Alsharif Abuadbba, Kristen Moore +1 more
The rapid advancement of generative models has enabled the creation of increasingly stealthy synthetic voices, commonly referred to as audio...
6 months ago cs.CR cs.LG cs.MM
PDF
Attack HIGH
Zheng-Xin Yong, Stephen H. Bach
We discover a novel and surprising phenomenon of unintentional misalignment in reasoning language models (RLMs), which we call self-jailbreaking....
6 months ago cs.CR cs.CL
PDF
Attack MEDIUM
Soham Hans, Stacy Marsella, Sophia Hirschmann +1 more
Understanding adversarial behavior in cybersecurity has traditionally relied on high-level intelligence reports and manual interpretation of attack...
6 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Austin Jia, Avaneesh Ramesh, Zain Shamsi +2 more
Retrieval-Augmented Generation (RAG) has emerged as the dominant architectural pattern to operationalize Large Language Model (LLM) usage in Cyber...
6 months ago cs.CR cs.AI cs.IR
PDF
Attack LOW
Antônio H. Ribeiro, David Vävinggren, Dave Zachariah +2 more
Adversarial training has emerged as a key technique to enhance model robustness against adversarial input perturbations. Many of the existing methods...
6 months ago stat.ML cs.CR cs.LG
PDF
Attack HIGH
Wei Shao, Yuhao Wang, Rongguang He +2 more
Existing defence mechanisms have demonstrated significant effectiveness in mitigating rule-based Denial-of-Service (DoS) attacks, leveraging...
6 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Daniel Gilkarov, Ran Dubin
Pretrained deep learning model sharing holds tremendous value for researchers and enterprises alike. It allows them to apply deep learning by...
Attack HIGH
Chiyu Chen, Xinhao Song, Yunkai Chai +7 more
Vision-Language Models (VLMs) are increasingly deployed as autonomous agents to navigate mobile graphical user interfaces (GUIs). Operating in...
6 months ago cs.CR cs.AI
PDF
Attack HIGH
Divyanshu Kumar, Shreyas Jena, Nitin Aravind Birur +3 more
Multimodal large language models (MLLMs) have achieved remarkable progress, yet remain critically vulnerable to adversarial attacks that exploit...
6 months ago cs.CR cs.MM
PDF
Attack MEDIUM
Tushar Nayan, Ziqi Zhang, Ruimin Sun
With the increasing deployment of Large Language Models (LLMs) on mobile and edge platforms, securing them against model extraction attacks has...
6 months ago cs.CR cs.LG cs.SE
PDF
Attack HIGH
Mohamed ElShehaby, Ashraf Matrawy
Adversarial attacks pose significant challenges to Machine Learning (ML) systems and especially Deep Neural Networks (DNNs) by subtly manipulating...
6 months ago cs.CR cs.LG
PDF
Attack HIGH
Ariana Yi, Ce Zhou, Liyang Xiao +1 more
As object detection models are increasingly deployed in cyber-physical systems such as autonomous vehicles (AVs) and surveillance platforms, ensuring...
6 months ago cs.CV cs.CR
PDF
Attack HIGH
Jia Deng, Jin Li, Zhenhua Zhao +1 more
Vision-Language Models (VLMs), such as CLIP, have demonstrated remarkable zero-shot generalizability across diverse downstream tasks. However, recent...
Attack MEDIUM
Petar Radanliev
Problem Space: AI Vulnerabilities and Quantum Threats Generative AI vulnerabilities: model inversion, data poisoning, adversarial inputs. Quantum...
6 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
R. Can Aygun, Yehuda Afek, Anat Bremler-Barr +1 more
With the goal of improving the security of Internet protocols, we seek faster, semi-automatic methods to discover new vulnerabilities in protocols...
6 months ago cs.CR cs.AI cs.NI
PDF
Attack HIGH
Yizhu Wang, Sizhe Chen, Raghad Alkhudair +2 more
When large language model (LLM) agents are increasingly deployed to automate tasks and interact with untrusted external data, prompt injection...
Attack HIGH
Sanskar Amgain, Daniel Lobo, Atri Chatterjee +2 more
The growing use of third-party hardware accelerators (e.g., FPGAs, ASICs) for deep neural networks (DNNs) introduces new security vulnerabilities....
6 months ago cs.CR cs.LG
PDF
Attack HIGH
Zheng Zhang, Jiarui He, Yuchen Cai +4 more
As large language model (LLM) agents increasingly automate complex web tasks, they boost productivity while simultaneously introducing new security...
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial