GCG Attack On A Diffusion LLM
Ruben Neyroud, Sam Corley
While most LLMs are autoregressive, diffusion-based LLMs have recently emerged as an alternative method for generation. Greedy Coordinate Gradient...
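The snippet above names Greedy Coordinate Gradient (GCG; Zou et al., 2023) without describing it. As a rough sketch: GCG optimizes an adversarial token suffix by using the gradient of the attack loss with respect to each suffix token's one-hot vector to shortlist top-k candidate substitutions per position, then greedily keeping the single token swap that most reduces the loss. A toy NumPy illustration with a hypothetical linear surrogate loss standing in for the LLM (the matrix `W`, vocabulary size, and loss are all invented for illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, SUFFIX_LEN, TOP_K = 50, 8, 4

# Hypothetical stand-in for the model: a fixed per-position token score.
# In real GCG this would be the LLM's cross-entropy on a target completion.
W = rng.normal(size=(SUFFIX_LEN, VOCAB))

def loss(suffix):
    # Toy surrogate: sum of scores of the chosen token at each position.
    return float(W[np.arange(SUFFIX_LEN), suffix].sum())

def gcg_step(suffix):
    # For a linear loss the gradient w.r.t. each one-hot token is just W,
    # so the top-k candidate substitutions are the k lowest-scoring tokens.
    candidates = np.argsort(W, axis=1)[:, :TOP_K]
    best, best_loss = suffix.copy(), loss(suffix)
    # Greedy coordinate step: try each candidate single-token swap,
    # keep the one swap that most reduces the loss.
    for pos in range(SUFFIX_LEN):
        for tok in candidates[pos]:
            trial = suffix.copy()
            trial[pos] = tok
            trial_loss = loss(trial)
            if trial_loss < best_loss:
                best, best_loss = trial, trial_loss
    return best, best_loss

suffix = rng.integers(0, VOCAB, size=SUFFIX_LEN)
cur = loss(suffix)
for _ in range(SUFFIX_LEN):  # a few greedy coordinate steps
    suffix, cur = gcg_step(suffix)
print(cur)
```

Because the surrogate loss here is separable per position, the greedy swaps converge to the per-position minima; against a real LLM the loss is non-separable, which is why GCG re-evaluates candidate swaps with full forward passes at every step.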
Yalin E. Sagduyu, Tugba Erpek, Aylin Yener +1 more
Semantic communication conveys task-relevant meaning rather than focusing solely on message reconstruction, improving bandwidth efficiency and...
Jingyu Zhang
Customer-service LLM agents increasingly make policy-bound decisions (refunds, rebooking, billing disputes), but the same "helpful" interaction...
Zhe Huang, Hao Wen, Aiming Hao +6 more
Multimodal Large Language Models (MLLMs) have made remarkable progress in video understanding. However, they suffer from a critical vulnerability: an...
Pankayaraj Pathmanathan, Michael-Andrei Panaitescu-Liess, Cho-Yu Jason Chiang +1 more
Retrieval-Augmented Generation (RAG) has emerged as a promising paradigm to enhance large language models (LLMs) with external knowledge, reducing...
Giuseppe Canale, Kashyap Thimmaraju
Large Language Models (LLMs) are rapidly transitioning from conversational assistants to autonomous agents embedded in critical organizational...
Yuan Xin, Dingfan Chen, Linyi Yang +2 more
As large language models (LLMs) are increasingly deployed, ensuring their safe use is paramount. Jailbreaking, adversarial prompts that bypass model...
Ruixuan Huang, Qingyue Wang, Hantao Huang +4 more
Mixture-of-Experts architectures have become the standard for scaling large language models due to their superior parameter efficiency. To...
Hung-Fu Chang, MohammadShokrolah Shirazi, Lizhou Cao +1 more
Recent advances in large language models (LLMs) have introduced new paradigms in software development, including vibe coding, AI-assisted coding, and...
Roee Ziv, Raz Lapid, Moshe Sipper
Audio-language models combine audio encoders with large language models to enable multimodal reasoning, but they also introduce new security...
Samaresh Kumar Singh, Joyjit Roy, Martin So
Recent attacks on critical infrastructure, including the 2021 Oldsmar water treatment breach and 2023 Danish energy sector compromises, highlight...
Panagiotis Theocharopoulos, Ajinkya Kulkarni, Mathew Magimai.-Doss
Large language models (LLMs) are increasingly considered for use in high-impact workflows, including academic peer review. However, LLMs are...
Heba Osama, Omar Elebiary, Youssef Qassim +4 more
Web applications increasingly face evasive and polymorphic attack payloads, yet traditional web application firewalls (WAFs) based on static rule...
Toqeer Ali Syed, Mishal Ateeq Almutairi, Mahmoud Abdel Moaty
Large Language Models enable powerful autonomous systems that reason, plan, and converse across numerous tools and agents...
Hazel Kim, Philip Torr
Large language models (LLMs) are highly vulnerable to input confirmation bias. When a prompt implies a preferred answer, models often reinforce that...
Alessio Benavoli, Alessandro Facchini, Marco Zaffalon
How can we ensure that AI systems are aligned with human values and remain safe? We can study this problem through the frameworks of the AI...
Toqeer Ali Syed, Mohammad Riyaz Belgaum, Salman Jan +2 more
Software supply chain attacks increasingly target trusted development and delivery procedures, so conventional post-build...
Manu, Yi Guo, Kanchana Thilakarathna +5 more
Large Language Models (LLMs) can be driven into over-generation, emitting thousands of tokens before producing an end-of-sequence (EOS) token. This...
Jiawei Liu, Zhuo Chen, Rui Zhu +4 more
Neural ranking models have achieved remarkable progress and are now widely deployed in real-world applications such as Retrieval-Augmented Generation...
Xingwei Ma, Shiyang Feng, Bo Zhang +1 more
Remote sensing change detection (RSCD), a complex multi-image inference task, traditionally uses pixel-based operators or encoder-decoder networks...