Defense LOW
Abhejay Murali, Saleh Afroogh, Kevin Chen +3 more
Current safety alignment for Large Language Models (LLMs) implicitly optimizes for a "modal adult user," leaving models vulnerable to distributional...
Defense LOW
Yining She, Daniel W. Peterson, Marianne Menglin Liu +4 more
With the increasing adoption of large language models (LLMs), ensuring the safety of LLM systems has become a pressing concern. External LLM-based...
7 months ago cs.CL cs.AI
PDF
Defense LOW
Siwei Han, Kaiwen Xiong, Jiaqi Liu +9 more
As Large Language Model (LLM) agents increasingly gain self-evolutionary capabilities to adapt and refine their strategies through real-world...
7 months ago cs.LG cs.AI
PDF
Defense MEDIUM
Shuai Zhao, Xinyi Wu, Shiqian Zhao +4 more
During fine-tuning, large language models (LLMs) are increasingly vulnerable to data-poisoning backdoor attacks, which compromise their reliability...
7 months ago cs.CR cs.AI cs.CL
PDF
Defense MEDIUM
Anindya Sundar Das, Kangjie Chen, Monowar Bhuyan
Pre-trained language models have achieved remarkable success across a wide range of natural language processing (NLP) tasks, particularly when...
7 months ago cs.CL cs.LG
PDF
Defense MEDIUM
Rui Wu, Yihao Quan, Zeru Shi +3 more
Safety-aligned Large Language Models (LLMs) still show two dominant failure modes: they are easily jailbroken, or they over-refuse harmless inputs...
7 months ago cs.CL cs.LG
PDF
Defense MEDIUM
Lesly Miculicich, Mihir Parmar, Hamid Palangi +4 more
The deployment of autonomous AI agents in sensitive domains, such as healthcare, introduces critical risks to safety, security, and privacy. These...
7 months ago cs.SE cs.AI cs.CR
PDF
Defense MEDIUM
Yuhao Sun, Zhuoer Xu, Shiwen Cui +4 more
Large Language Models (LLMs) have achieved remarkable progress across a wide range of tasks, but remain vulnerable to safety risks such as harmful...
7 months ago cs.AI cs.CR cs.LG
PDF
Defense LOW
Muhammad Faheemur Rahman, Wayne Burleson
Memristive crossbar arrays enable in-memory computing by performing parallel analog computations directly within memory, making them well-suited for...
7 months ago cs.CR cs.AR cs.ET
PDF
Defense MEDIUM
Guobin Shen, Dongcheng Zhao, Haibo Tong +3 more
Ensuring Large Language Model (LLM) safety remains challenging due to the absence of universal standards and reliable content validators, making it...
Defense HIGH
Shojiro Yamabe, Jun Sakuma
Diffusion language models (DLMs) generate tokens in parallel through iterative denoising, which can reduce latency and enable bidirectional...
7 months ago cs.AI cs.LG
PDF
Defense LOW
Boyang Zhang, Istemi Ekin Akkus, Ruichuan Chen +4 more
Multimodal large language models (MLLMs) have demonstrated remarkable capabilities in processing and reasoning over diverse modalities, but their...
7 months ago cs.CR cs.LG
PDF
Defense MEDIUM
Ayda Aghaei Nia
Completely Automated Public Turing tests to tell Computers and Humans Apart (CAPTCHAs) are a foundational component of web security, yet traditional...
7 months ago cs.CR cs.AI
PDF
Defense LOW
Akio Hayakawa, Stefan Bott, Horacio Saggion
Despite their strong performance, large language models (LLMs) face challenges in real-world application of lexical simplification (LS), particularly...
Defense MEDIUM
Zherui Li, Zheng Nie, Zhenhong Zhou +7 more
The rapid advancement of Diffusion Large Language Models (dLLMs) introduces unprecedented vulnerabilities that are fundamentally distinct from...
7 months ago cs.CL cs.AI
PDF
Defense MEDIUM
Gauri Kholkar, Ratinder Ahuja
As autonomous AI agents are used in regulated and safety-critical settings, organizations need effective ways to turn policy into enforceable...
7 months ago cs.CL cs.AI
PDF
Defense MEDIUM
Yuqiao Meng, Luoxi Tang, Feiyang Yu +4 more
Large language models (LLMs) are increasingly used to help security analysts manage the surge of cyber threats, automating tasks from vulnerability...
7 months ago cs.CR cs.AI
PDF
Defense MEDIUM
Zeyu Shen, Basileal Imana, Tong Wu +3 more
Retrieval-Augmented Generation (RAG) enhances Large Language Models by grounding their outputs in external documents. These systems, however, remain...
7 months ago cs.CR cs.AI
PDF
Defense MEDIUM
Charles E. Gagnon, Steven H. H. Ding, Philippe Charland +1 more
Binary code similarity detection is a core task in reverse engineering. It supports malware analysis and vulnerability discovery by identifying...
7 months ago cs.AI cs.CR cs.SE
PDF
Defense LOW
M. Z. Haider, Tayyaba Noreen, M. Salman
Blockchain business applications and cryptocurrencies enable secure, decentralized value transfer, yet their pseudonymous nature creates...
7 months ago cs.LG cs.AI cs.CR
PDF