Attack HIGH
Yanxi Li, Ruocheng Shan
Large language models are increasingly used for text classification tasks such as sentiment analysis, yet their reliance on natural language prompts...
5 months ago cs.CL cs.AI
Attack HIGH
Yanting Wang, Runpeng Geng, Jinghui Chen +2 more
Many recent studies have shown that LLMs are vulnerable to jailbreak attacks, where an attacker can perturb the input of an LLM to induce it to generate...
Attack HIGH
Pinaki Prasad Guha Neogi, Ahmad Mohammadshirazi, Dheeraj Kulshrestha +1 more
Mixture-of-Experts (MoE) architectures are increasingly adopted in large language models (LLMs) for their scalability and efficiency. However, their...
5 months ago cs.LG cs.AI
Attack HIGH
Junrui Zhang, Xinyu Zhao, Jie Peng +3 more
Multimodal learning has shown significant superiority on various tasks by integrating multiple modalities. However, the interdependencies among...
5 months ago cs.LG cs.CR
Attack MEDIUM
Itay Hazan, Yael Mathov, Guy Shtar +2 more
Securing AI agents powered by Large Language Models (LLMs) represents one of the most critical challenges in AI security today. Unlike traditional...
Attack HIGH
Oluleke Babayomi, Dong-Seong Kim
Electric Vehicle (EV) charging infrastructure faces escalating cybersecurity threats that can severely compromise operational efficiency and grid...
5 months ago cs.LG cs.CR
Attack HIGH
Yunyi Zhang, Shibo Cui, Baojun Liu +4 more
LLM applications (i.e., LLM apps) leverage the powerful capabilities of LLMs to provide users with customized services, revolutionizing traditional...
Attack HIGH
Zhiyuan Xu, Stanislav Abaimov, Joseph Gardiner +1 more
Modern large language models (LLMs) are typically secured by auditing data, prompts, and refusal policies, while treating the forward pass as an...
Attack MEDIUM
Atharv Singh Patlan, Peiyao Sheng, S. Ashwin Hebbar +2 more
Language agents are rapidly expanding from single-user assistants to multi-user collaborators in shared workspaces and groups. However, today's...
5 months ago cs.CR cs.AI cs.CL
Attack MEDIUM
Tom Perel
The recent boom and rapid integration of Large Language Models (LLMs) into a wide range of applications warrants a deeper understanding of their...
5 months ago cs.CR cs.AI
Attack HIGH
Zhen Sun, Zongmin Zhang, Deqi Liang +8 more
As LLMs become more common, non-expert users can pose risks, prompting extensive research into jailbreak attacks. However, most existing black-box...
5 months ago cs.CR cs.AI
Attack MEDIUM
Hussein Jawad, Nicolas Brunel
System prompts are critical for guiding the behavior of Large Language Models (LLMs), yet they often contain proprietary logic or sensitive...
5 months ago cs.CR cs.CL
Attack HIGH
Yijun Yang, Lichao Wang, Jianping Zhang +3 more
The growing misuse of Vision-Language Models (VLMs) has led providers to deploy multiple safeguards, including alignment tuning, system prompts, and...
Attack HIGH
Yige Li, Zhe Li, Wei Zhao +4 more
Backdoor attacks pose a serious threat to the secure deployment of large language models (LLMs), enabling adversaries to implant hidden behaviors...
5 months ago cs.CR cs.AI
Attack HIGH
Zhihan Ren, Lijun He, Jiaxi Liang +3 more
Split DNNs enable deep learning on edge devices by offloading intensive computation to a cloud server, but this paradigm exposes privacy vulnerabilities, as the...
Attack HIGH
Piercosma Bisconti, Matteo Prandi, Federico Pierucci +7 more
We present evidence that adversarial poetry functions as a universal single-turn jailbreak technique for Large Language Models (LLMs). Across 25...
5 months ago cs.CL cs.AI
Attack HIGH
Badrinath Ramakrishnan, Akshaya Balaji
Retrieval-augmented generation (RAG) systems have become widely used for enhancing large language model capabilities, but they introduce significant...
5 months ago cs.CR cs.AI
Attack HIGH
Xin Yi, Yue Li, Dongsheng Shi +3 more
Large Language Models (LLMs) are increasingly integrated into educational applications. However, they remain vulnerable to jailbreak and fine-tuning...
Attack HIGH
Zhengchunmin Dai, Jiaxiong Tang, Peng Sun +2 more
In decentralized machine learning paradigms such as Split Federated Learning (SFL) and its variant U-shaped SFL, the server's capabilities are...
5 months ago cs.CR cs.AI cs.LG
Attack HIGH
Eric Xue, Ruiyi Zhang, Pengtao Xie
Modern language models remain vulnerable to backdoor attacks via poisoned data, where training inputs containing a trigger are paired with a target...
5 months ago cs.CR cs.CL cs.LG