Attack HIGH
Alexandrine Fortier, Thomas Thebaud, Jesús Villalba +2 more
Large Language Models (LLMs) and their multimodal extensions are becoming increasingly popular. One common approach to enable multimodality is to...
7 months ago cs.CL cs.CR cs.SD
Defense MEDIUM
Guobin Shen, Dongcheng Zhao, Haibo Tong +3 more
Ensuring Large Language Model (LLM) safety remains challenging due to the absence of universal standards and reliable content validators, making it...
Benchmark MEDIUM
Yicheng Lang, Yihua Zhang, Chongyu Fan +3 more
Large language model (LLM) unlearning aims to surgically remove the influence of undesired data or knowledge from an existing model while preserving...
Benchmark LOW
Chen-An Li, Tzu-Han Lin, Hung-yi Lee
Large audio-language models (LALMs) unify speech and text processing, but their robustness in noisy real-world settings remains underexplored. We...
7 months ago cs.SD cs.CL
Attack MEDIUM
Yen-Shan Chen, Sian-Yao Huang, Cheng-Lin Yang +1 more
Existing data poisoning attacks on retrieval-augmented generation (RAG) systems scale poorly because they require costly optimization of poisoned...
7 months ago cs.LG cs.CL cs.CR
Defense HIGH
Shojiro Yamabe, Jun Sakuma
Diffusion language models (DLMs) generate tokens in parallel through iterative denoising, which can reduce latency and enable bidirectional...
7 months ago cs.AI cs.LG
Benchmark MEDIUM
Andrew Gan, Zahra Ghodsi
Machine learning systems increasingly rely on open-source artifacts such as datasets and models that are created or hosted by other parties. The...
Tool MEDIUM
Hongbo Liu, Jiannong Cao, Bo Yang +7 more
The rapid advancement of large language models (LLMs) in recent years has revolutionized the AI landscape. However, the deployment model and usage of...
7 months ago cs.CR cs.DC
Attack HIGH
Raik Dankworth, Gesina Schwalbe
Deep neural networks (NNs) for computer vision are vulnerable to adversarial attacks, i.e., minuscule malicious changes to inputs may induce...
7 months ago cs.CR cs.LG
Attack MEDIUM
Tsubasa Takahashi, Shojiro Yamabe, Futa Waseda +1 more
Differential Attention (DA) has been proposed as a refinement to standard attention, suppressing redundant or noisy context through a subtractive...
7 months ago cs.LG cs.CR
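A minimal sketch of the differential-attention idea this entry refers to, assuming the common formulation in which a second softmax attention map is subtracted from the first to cancel common-mode (noisy) attention weight; the function names and the subtraction weight `lam` are illustrative, not the paper's code:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def differential_attention(q1, q2, keys1, keys2, values, lam=0.5):
    """Single-query differential attention: compute two softmax
    attention maps and subtract the second (scaled by lam) so that
    attention weight shared by both maps cancels out.
    q1, q2: query vectors; keys1, keys2: lists of key vectors;
    values: list of value vectors, one per context position."""
    d = len(q1)
    scores1 = [sum(a * b for a, b in zip(q1, k)) / math.sqrt(d) for k in keys1]
    scores2 = [sum(a * b for a, b in zip(q2, k)) / math.sqrt(d) for k in keys2]
    a1, a2 = softmax(scores1), softmax(scores2)
    weights = [w1 - lam * w2 for w1, w2 in zip(a1, a2)]
    dim_v = len(values[0])
    return [sum(w * v[j] for w, v in zip(weights, values)) for j in range(dim_v)]
```

With `lam=0` this reduces to standard scaled dot-product attention; with identical query/key pairs the output is simply scaled by `1 - lam`, which is the degenerate case the subtraction is meant to exploit on redundant context.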
Attack MEDIUM
Yu Yan, Siqi Lu, Yang Gao +4 more
Recently, Bit-Flip Attack (BFA) has garnered widespread attention for its ability to compromise software system integrity remotely through hardware...
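As a toy illustration of why a single hardware-induced bit flip is so damaging (a self-contained sketch, not the paper's attack): flipping one exponent bit of an IEEE 754 float32 model weight can change its magnitude by dozens of orders of magnitude, which is typically enough to destroy a model's accuracy.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a float32 value (IEEE 754 binary32 layout:
    bit 31 = sign, bits 30-23 = exponent, bits 22-0 = mantissa)."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    return struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))[0]

w = 0.5                        # a typical small model weight
corrupted = flip_bit(w, 30)    # flip the most significant exponent bit
```

Here a single flip of bit 30 turns a weight of 0.5 into a value on the order of 10^38, which is why defenses focus on protecting the handful of most vulnerable bits.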
Attack HIGH
Chenxiang Luo, David K. Y. Yau, Qun Song
Federated learning (FL) enables collaborative model training without sharing raw data but is vulnerable to gradient inversion attacks (GIAs), where...
7 months ago cs.CR cs.LG
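To make the leakage mechanism behind gradient inversion concrete, here is a minimal stdlib-only sketch (illustrative, not the paper's method): for a single sample passing through a linear layer with squared-error loss, the shared weight gradient is the outer product of the output error and the input, so the raw input can be recovered exactly from the gradients a client sends to the server.

```python
def linear_grads(W, b, x, target):
    """Gradients of 0.5 * ||W x + b - target||^2 for one sample."""
    y = [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(W, b)]
    delta = [yi - ti for yi, ti in zip(y, target)]     # dL/dy
    grad_W = [[d * xi for xi in x] for d in delta]     # outer product delta x^T
    grad_b = delta                                     # dL/db
    return grad_W, grad_b

def invert_input(grad_W, grad_b):
    """Recover the private input x from shared gradients alone:
    each row satisfies grad_W[i] = grad_b[i] * x, so dividing any
    row with a nonzero bias gradient reconstructs x exactly."""
    for d, row in zip(grad_b, grad_W):
        if abs(d) > 1e-12:
            return [g / d for g in row]
    raise ValueError("all-zero gradient; cannot invert")
```

Real gradient inversion attacks generalize this closed-form case to deep networks by optimizing a dummy input until its gradients match the observed ones, but the linear layer already shows why "gradients only" is not a privacy guarantee.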
Tool MEDIUM
Dalal Alharthi, Ivan Roberto Kawaminami Garcia
Large Language Models (LLMs) have gained prominence in domains including cloud security and forensics. Yet cloud forensic investigations still rely...
7 months ago cs.CR cs.AI cs.LG
Attack MEDIUM
Dalal Alharthi, Ivan Roberto Kawaminami Garcia
Large language models have gained widespread prominence, yet their vulnerability to prompt injection and other adversarial attacks remains a critical...
7 months ago cs.CR cs.AI cs.LG
Benchmark HIGH
Haoran Xi, Minghao Shao, Brendan Dolan-Gavitt +2 more
Large language models show promise for vulnerability discovery, yet prevailing methods inspect code in isolation, struggle with long contexts, and...
7 months ago cs.SE cs.CR cs.LG
Attack MEDIUM
Samar Fares, Nurbek Tastan, Noor Hussein +1 more
Generative models can generate photorealistic images at scale. This raises urgent concerns about the ability to detect synthetically generated images...
7 months ago cs.CV cs.CR cs.LG
Benchmark MEDIUM
Ehsan Aghaei, Sarthak Jain, Prashanth Arun +1 more
Effective analysis of cybersecurity and threat intelligence data demands language models that can interpret specialized terminology, complex document...
7 months ago cs.CR cs.AI cs.LG
Attack MEDIUM
Luis Burbano, Diego Ortiz, Qi Sun +5 more
Embodied Artificial Intelligence (AI) promises to handle edge cases in robotic vehicle systems where data is scarce by using common-sense reasoning...
7 months ago cs.CR cs.AI cs.LG
Tool LOW
João Vitorino, Eva Maia, Isabel Praça +1 more
Due to the susceptibility of Artificial Intelligence (AI) to data perturbations and adversarial examples, it is crucial to perform a thorough...
7 months ago cs.LG cs.CR
Attack MEDIUM
Anshul Nasery, Edoardo Contente, Alkin Kaz +2 more
Model fingerprinting has emerged as a promising paradigm for claiming model ownership. However, robustness evaluations of these schemes have mostly...
7 months ago cs.CR cs.AI cs.LG