Defense MEDIUM
Hanbin Hong, Ashish Kundu, Ali Payani +2 more
Randomized smoothing has become essential for achieving certified adversarial robustness in machine learning models. However, current methods...
5 months ago cs.LG cs.CR
PDF
Benchmark HIGH
Euodia Dodd, Nataša Krčo, Igor Shilov +1 more
Membership inference attacks (MIAs) have emerged as the standard tool for evaluating the privacy risks of AI models. However, state-of-the-art...
5 months ago cs.LG cs.CR
PDF
Attack HIGH
Mohamed ElShehaby, Ashraf Matrawy
Adversarial attacks pose significant challenges to Machine Learning (ML) systems and especially Deep Neural Networks (DNNs) by subtly manipulating...
5 months ago cs.CR cs.LG
PDF
Attack HIGH
Ariana Yi, Ce Zhou, Liyang Xiao +1 more
As object detection models are increasingly deployed in cyber-physical systems such as autonomous vehicles (AVs) and surveillance platforms, ensuring...
5 months ago cs.CV cs.CR
PDF
Tool MEDIUM
Zhonghao Zhan, Amir Al Sadi, Krinos Li +1 more
In this work, we study security of Model Context Protocol (MCP) agent toolchains and their applications in smart homes. We introduce AegisMCP, a...
Benchmark MEDIUM
Chengcan Wu, Zhixin Zhang, Mingqian Xu +2 more
Large Language Model (LLM)-based Multi-Agent Systems (MAS) have become a popular paradigm of AI applications. However, trustworthiness issues in MAS...
5 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Jia Deng, Jin Li, Zhenhua Zhao +1 more
Vision-Language Models (VLMs), such as CLIP, have demonstrated remarkable zero-shot generalizability across diverse downstream tasks. However, recent...
Attack MEDIUM
Petar Radanliev
Problem Space: AI Vulnerabilities and Quantum Threats Generative AI vulnerabilities: model inversion, data poisoning, adversarial inputs. Quantum...
5 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
R. Can Aygun, Yehuda Afek, Anat Bremler-Barr +1 more
With the goal of improving the security of Internet protocols, we seek faster, semi-automatic methods to discover new vulnerabilities in protocols...
5 months ago cs.CR cs.AI cs.NI
PDF
Attack HIGH
Yizhu Wang, Sizhe Chen, Raghad Alkhudair +2 more
When large language model (LLM) agents are increasingly deployed to automate tasks and interact with untrusted external data, prompt injection...
Tool MEDIUM
Thomas Wang, Haowen Li
As large language models (LLMs) are increasingly integrated into real-world applications, ensuring their safety, robustness, and privacy compliance...
5 months ago cs.CR cs.CL
PDF
Attack HIGH
Sanskar Amgain, Daniel Lobo, Atri Chatterjee +2 more
The growing use of third-party hardware accelerators (e.g., FPGAs, ASICs) for deep neural networks (DNNs) introduces new security vulnerabilities....
5 months ago cs.CR cs.LG
PDF
Benchmark LOW
Joydeep Chandra, Satyam Kumar Navneet
Domestic AI agents faces ethical, autonomy, and inclusion challenges, particularly for overlooked groups like children, elderly, and Neurodivergent...
5 months ago cs.HC cs.AI cs.LG
PDF
Benchmark LOW
Sophia Xiao Pu, Sitao Cheng, Xin Eric Wang +1 more
Oversensitivity occurs when language models defensively reject prompts that are actually benign. This behavior not only disrupts user interactions...
Tool HIGH
Sidhant Narula, Javad Rafiei Asl, Mohammad Ghasemigol +2 more
Large Language Models (LLMs) remain vulnerable to multi-turn jailbreak attacks. We introduce HarmNet, a modular framework comprising ThoughtNet, a...
5 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Alexander Nemecek, Zebin Yun, Zahra Rahmani +4 more
As large language models (LLMs) become progressively more embedded in clinical decision-support, documentation, and patient-information systems,...
5 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Marco Alecci, Jordan Samhi, Tegawendé F. Bissyandé +1 more
Mobile apps often embed authentication secrets, such as API keys, tokens, and client IDs, to integrate with cloud services. However, developers often...
5 months ago cs.CR cs.SE
PDF
Tool HIGH
Zijie Xu, Minfeng Qi, Shiqing Wu +4 more
Multi-agent systems powered by large language models are advancing rapidly, yet the tension between mutual trust and security remains underexplored....
Benchmark MEDIUM
Giovanni De Muri, Mark Vero, Robin Staab +1 more
LLMs are often used by downstream users as teacher models for knowledge distillation, compressing their capabilities into memory-efficient models....
5 months ago cs.LG cs.AI cs.CR
PDF
Benchmark HIGH
Osama Al Haddad, Muhammad Ikram, Ejaz Ahmed +1 more
Security analysts face increasing pressure to triage large and complex vulnerability backlogs. Large Language Models (LLMs) offer a potential aid by...
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial