Attack HIGH
Ariana Yi, Ce Zhou, Liyang Xiao +1 more
As object detection models are increasingly deployed in cyber-physical systems such as autonomous vehicles (AVs) and surveillance platforms, ensuring...
5 months ago cs.CV cs.CR
PDF
Attack HIGH
Jia Deng, Jin Li, Zhenhua Zhao +1 more
Vision-Language Models (VLMs), such as CLIP, have demonstrated remarkable zero-shot generalizability across diverse downstream tasks. However, recent...
Attack MEDIUM
Petar Radanliev
Problem Space: AI Vulnerabilities and Quantum Threats Generative AI vulnerabilities: model inversion, data poisoning, adversarial inputs. Quantum...
5 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
R. Can Aygun, Yehuda Afek, Anat Bremler-Barr +1 more
With the goal of improving the security of Internet protocols, we seek faster, semi-automatic methods to discover new vulnerabilities in protocols...
5 months ago cs.CR cs.AI cs.NI
PDF
Attack HIGH
Yizhu Wang, Sizhe Chen, Raghad Alkhudair +2 more
When large language model (LLM) agents are increasingly deployed to automate tasks and interact with untrusted external data, prompt injection...
Attack HIGH
Sanskar Amgain, Daniel Lobo, Atri Chatterjee +2 more
The growing use of third-party hardware accelerators (e.g., FPGAs, ASICs) for deep neural networks (DNNs) introduces new security vulnerabilities....
5 months ago cs.CR cs.LG
PDF
Attack HIGH
Zheng Zhang, Jiarui He, Yuchen Cai +4 more
As large language model (LLM) agents increasingly automate complex web tasks, they boost productivity while simultaneously introducing new security...
Attack HIGH
Isaac Wu, Michael Maslowski
As large language models (LLMs) become integrated into various sensitive applications, prompt injection, the use of prompting to induce harmful...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Neeladri Bhuiya, Madhav Aggarwal, Diptanshu Purwar
Large Language Models (LLMs) are improving at an exceptional rate. With the advent of agentic workflows, multi-turn dialogue has become the de facto...
5 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Xu Zhang, Hao Li, Zhichao Lu
Multimodal Large Language Models (MLLMs) achieve strong reasoning and perception capabilities but are increasingly vulnerable to jailbreak attacks....
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Vincenzo Carletti, Pasquale Foggia, Carlo Mazzocca +2 more
Federated Learning (FL) enables collaborative training of Machine Learning (ML) models across multiple clients while preserving their privacy. Rather...
5 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Yushi Yang, Shreyansh Padarha, Andrew Lee +1 more
Agentic reinforcement learning (RL) trains large language models to autonomously call tools during reasoning, with search as the most common...
Attack HIGH
Xinkai Wang, Beibei Li, Zerui Shao +3 more
Multimodal large language models (MLLMs) have become integral to a wide range of real-world applications by jointly reasoning over text and visual...
Attack HIGH
Giulia Giusti
The concept of linearity plays a central role in both mathematics and computer science, with distinct yet complementary meanings. In mathematics,...
5 months ago cs.CR cs.LO cs.PL
PDF
Attack MEDIUM
Elias Hossain, Swayamjit Saha, Somshubhra Roy +1 more
Even when prompts and parameters are secured, transformer language models remain vulnerable because their key-value (KV) cache during inference...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Masahiro Kaneko, Zeerak Talat, Timothy Baldwin
Iterative jailbreak methods that repeatedly rewrite and input prompts into large language models (LLMs) to induce harmful outputs -- using the...
Attack HIGH
Masahiro Kaneko, Timothy Baldwin
Adversarial attacks by malicious users that threaten the safety of large language models (LLMs) can be viewed as attempts to infer a target property...
5 months ago cs.CR cs.CL cs.LG
PDF
Attack HIGH
Mansi Phute, Matthew Hull, Haoran Wang +6 more
Deep learning models deployed in safety critical applications like autonomous driving use simulations to test their robustness against adversarial...
5 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Amirkia Rafiei Oskooei, Mehmet S. Aktas
The proficiency of Large Language Models (LLMs) in processing structured data and adhering to syntactic rules is a capability that drives their...
5 months ago cs.CR cs.AI cs.CL
PDF
Attack MEDIUM
Jie Zhang, Meng Ding, Yang Liu +2 more
We present a novel approach for attacking black-box large language models (LLMs) by exploiting their ability to express confidence in natural...
5 months ago cs.CR cs.LG
PDF
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial