GCG Attack On A Diffusion LLM
Ruben Neyroud, Sam Corley
While most LLMs are autoregressive, diffusion-based LLMs have recently emerged as an alternative method for generation. Greedy Coordinate Gradient...
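The abstract names Greedy Coordinate Gradient (GCG) as its attack method. As a minimal, hypothetical sketch of the greedy coordinate-search idea only — not the paper's method: the vocabulary, target, and surrogate loss below are toy stand-ins, whereas real GCG ranks single-token substitutions by the gradient of the loss with respect to one-hot token embeddings:

```python
# Toy sketch of GCG-style greedy coordinate search: at each step, propose
# single-token substitutions in an adversarial suffix and keep the candidate
# that most lowers a loss. The loss here is a toy surrogate (distance to a
# hypothetical target suffix) so the loop is self-contained.
import random

VOCAB = list(range(50))          # toy vocabulary of token ids (assumption)
TARGET = [7, 3, 7, 3, 7]         # hypothetical target suffix (assumption)

def loss(suffix):
    # Toy surrogate loss: elementwise distance from the target suffix.
    return sum(abs(a - b) for a, b in zip(suffix, TARGET))

def gcg_step(suffix, n_candidates=64):
    """One greedy coordinate step: sample candidate single-token swaps
    and return the best-scoring suffix (never worse than the input)."""
    best = list(suffix)
    best_loss = loss(best)
    for _ in range(n_candidates):
        cand = list(suffix)
        pos = random.randrange(len(cand))     # coordinate to perturb
        cand[pos] = random.choice(VOCAB)      # candidate substitution
        cand_loss = loss(cand)
        if cand_loss < best_loss:
            best, best_loss = cand, cand_loss
    return best, best_loss

if __name__ == "__main__":
    random.seed(0)
    suffix = [random.choice(VOCAB) for _ in range(5)]
    for _ in range(50):                       # iterate greedy steps
        suffix, current_loss = gcg_step(suffix)
        if current_loss == 0:
            break
    print(suffix, current_loss)
```

In real GCG the candidate pool is not sampled uniformly but taken from the top-k tokens under a gradient-based linear approximation of the loss; the greedy keep-the-best structure above is the part the name describes.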
Yuan Xin, Dingfan Chen, Linyi Yang +2 more
As large language models (LLMs) are increasingly deployed, ensuring their safe use is paramount. Jailbreaking, adversarial prompts that bypass model...
Roee Ziv, Raz Lapid, Moshe Sipper
Audio-language models combine audio encoders with large language models to enable multimodal reasoning, but they also introduce new security...
Jiawei Liu, Zhuo Chen, Rui Zhu +4 more
Neural ranking models have achieved remarkable progress and are now widely deployed in real-world applications such as Retrieval-Augmented Generation...
Zhen Liang, Hai Huang, Zhengkui Chen
Large language models (LLMs), such as ChatGPT, have achieved remarkable success across a wide range of fields. However, their trustworthiness remains...
Soham Padia, Dhananjay Vaidya, Ramchandra Mangrulkar
Securing blockchain-enabled IoT networks against sophisticated adversarial attacks remains a critical challenge. This paper presents a trust-based...
Zongmin Zhang, Zhen Sun, Yifan Liao +5 more
Prompt-driven Video Segmentation Foundation Models (VSFMs) such as SAM2 are increasingly deployed in applications like autonomous driving and digital...
Mengqi He, Xinyu Tian, Xin Shen +4 more
Vision-language models (VLMs) achieve remarkable performance but remain vulnerable to adversarial attacks. Entropy, a measure of model uncertainty,...
Duo Chai, Zizhen Liu, Shuhuai Wang +4 more
Large language models (LLMs) are highly compute- and memory-intensive, posing significant demands on high-performance GPUs. At the same time,...
Tianwei Lan, Farid Naït-Abdesselam
The rapid growth in both the scale and complexity of Android malware has driven the widespread adoption of machine learning (ML) techniques for...
Xinjie Xu, Shuyu Cheng, Dongwei Xu +2 more
In hard-label black-box adversarial attacks, where only the top-1 predicted label is accessible, the prohibitive query complexity poses a major...
Lichao Wu, Sasha Behrouzi, Mohamadreza Rostami +2 more
Mixture-of-Experts (MoE) architectures have advanced the scaling of Large Language Models (LLMs) by activating only a sparse subset of parameters per...
Yihan Wang, Huanqi Yang, Shantanu Pal +1 more
The integration of Large Language Models (LLMs) into wearable sensing is creating a new class of mobile applications capable of nuanced human...
Omer Gazit, Yael Itzhakev, Yuval Elovici +1 more
Radio frequency (RF) based systems are increasingly used to detect drones by analyzing their RF signal patterns, converting them into spectrogram...
Linzhi Chen, Yang Sun, Hongru Wei +1 more
Low-Rank Adaptation (LoRA) has emerged as an efficient method for fine-tuning large language models (LLMs) and is widely adopted within the...
Sameera K. M., Serena Nicolazzo, Antonino Nocera +2 more
Federated Learning (FL) has recently emerged as a revolutionary approach to collaboratively training Machine Learning models. In particular, it enables...
Akshaj Prashanth Rao, Advait Singh, Saumya Kumaar Saksena +1 more
Prompt injection and jailbreaking attacks pose persistent security challenges to large language model (LLM)-based systems. We present PromptScreen,...
Jianyi Zhang, Shizhao Liu, Ziyin Zhou +1 more
The rapid advancement of large language models (LLMs) has intensified concerns about the robustness of their safety alignment. While existing...
Huixin Zhan
Genomic Foundation Models (GFMs), such as Evolutionary Scale Modeling (ESM), have demonstrated remarkable success in variant effect prediction....
Kai Hu, Abhinav Aggarwal, Mehran Khodabandeh +6 more
This paper introduces Jailbreak-Zero, a novel red teaming methodology that shifts the paradigm of Large Language Model (LLM) safety evaluation from a...