Benchmark LOW
Aishwarya Agarwal, Srikrishna Karanam, Vineet Gandhi
Contrastive vision-language models (VLMs) such as CLIP achieve strong zero-shot recognition yet remain vulnerable to spurious correlations,...
Benchmark MEDIUM
Minjie Wang, Jinguang Han, Weizhi Meng
In federated learning, multiple parties can cooperate to train the model without directly exchanging their own private data, but the gradient leakage...
5 months ago cs.CR cs.AI
PDF
Defense LOW
Mohammad Marufur Rahman, Guanchu Wang, Kaixiong Zhou +2 more
Catastrophic forgetting is a longstanding challenge in continual learning, where models lose knowledge from earlier tasks when learning new ones....
5 months ago cs.LG cs.AI
PDF
Attack MEDIUM
Ayush Chaudhary, Sisir Doppalpudi
The deployment of robust malware detection systems in big data environments requires careful consideration of both security effectiveness and...
5 months ago cs.CR cs.LG
PDF
Attack MEDIUM
Thomas Rivasseau
Current Large Language Model alignment research mostly focuses on improving model robustness against adversarial attacks and misbehavior by training...
5 months ago cs.CL cs.CR
PDF
Attack HIGH
Mukkesh Ganesh, Kaushik Iyer, Arun Baalaaji Sankar Ananthan
The Key Value(KV) cache is an important component for efficient inference in autoregressive Large Language Models (LLMs), but its role as a...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Yunhao Chen, Xin Wang, Juncheng Li +5 more
Automated red teaming frameworks for Large Language Models (LLMs) have become increasingly sophisticated, yet they share a fundamental limitation:...
5 months ago cs.CL cs.CR
PDF
Tool LOW
Samuel Nathanson, Alexander Lee, Catherine Chen Kieffer +7 more
Assurance for artificial intelligence (AI) systems remains fragmented across software supply-chain security, adversarial machine learning, and...
5 months ago cs.CR cs.AI cs.LG
PDF
Tool MEDIUM
Rathin Chandra Shit, Sharmila Subudhi
The security of autonomous vehicle networks is facing major challenges, owing to the complexity of sensor integration, real-time performance demands,...
5 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Haotian Jin, Yang Li, Haihui Fan +3 more
Backdoor attacks pose a serious threat to the security of large language models (LLMs), causing them to exhibit anomalous behavior under specific...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Samuel Nathanson, Rebecca Williams, Cynthia Matuszek
Large language models (LLMs) increasingly operate in multi-agent and safety-critical settings, raising open questions about how their vulnerabilities...
5 months ago cs.LG cs.AI cs.CL
PDF
Defense MEDIUM
JoonHo Lee, HyeonMin Cho, Jaewoong Yun +3 more
We present SGuard-v1, a lightweight safety guardrail for Large Language Models (LLMs), which comprises two specialized models to detect harmful...
5 months ago cs.CL cs.AI cs.CR
PDF
Attack MEDIUM
Onkar Shelar, Travis Desell
Large Language Models remain vulnerable to adversarial prompts that elicit toxic content even after safety alignment. We present ToxSearch, a...
5 months ago cs.NE cs.AI cs.CL
PDF
Attack HIGH
Jiaji Ma, Puja Trivedi, Danai Koutra
Text-attributed graphs (TAGs), which combine structural and textual node information, are ubiquitous across many domains. Recent work integrates...
5 months ago cs.CR cs.LG
PDF
Attack MEDIUM
Yuting Tan, Yi Huang, Zhuo Li
Backdoor attacks on large language models (LLMs) typically couple a secret trigger to an explicit malicious output. We show that this explicit...
5 months ago cs.LG cs.CR
PDF
Benchmark LOW
Yikun Li, Matteo Grella, Daniel Nahmias +5 more
In recent years, Infrastructure as Code (IaC) has emerged as a critical approach for managing and provisioning IT infrastructure through code and...
5 months ago cs.CR cs.SE
PDF
Attack HIGH
Hasini Jayathilaka
Prompt injection attacks are an emerging threat to large language models (LLMs), enabling malicious users to manipulate outputs through carefully...
Attack HIGH
Rui Wang, Zeming Wei, Xiyue Zhang +1 more
Deep Neural Networks (DNNs) are known to be vulnerable to various adversarial perturbations. To address the safety concerns arising from these...
5 months ago cs.LG cs.AI cs.CR
PDF
Attack HIGH
Gil Goren, Shahar Katz, Lior Wolf
Large Language Models (LLMs) are vulnerable to adversarial attacks that bypass safety guidelines and generate harmful content. Mitigating these...
Defense HIGH
Jie Chen, Liangmin Wang
Fuzzing is a widely used technique for detecting vulnerabilities in smart contracts, which generates transaction sequences to explore the execution...
5 months ago cs.CR cs.SE
PDF
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial