Attack HIGH
Jing Cui, Yufei Han, Jianbin Jiao +1 more
Backdoor attacks embed malicious behaviors into Large Language Models (LLMs), enabling adversaries to trigger harmful outputs or bypass safety...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Khurram Khalil, Khaza Anuarul Hoque
Generative Artificial Intelligence models, such as Large Language Models (LLMs) and Vision-Language Models (VLMs), exhibit state-of-the-art performance...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Mohamed Afane, Abhishek Satyam, Ke Chen +3 more
Backdoor attacks pose significant security threats to language models by embedding hidden triggers that manipulate model behavior during inference,...
3 months ago cs.CR cs.CL
PDF
Attack HIGH
Reachal Wang, Yuqi Jia, Neil Zhenqiang Gong
Prompt injection attacks aim to contaminate the input data of an LLM to mislead it into completing an attacker-chosen task instead of the intended...
Attack HIGH
Joshua Ward, Bochao Gu, Chi-Hua Wang +1 more
Large Language Models (LLMs) have recently demonstrated remarkable performance in generating high-quality tabular synthetic data. In practice, two...
3 months ago cs.LG cs.AI
PDF
Attack HIGH
Yinan Zhong, Qianhao Miao, Yanjiao Chen +3 more
Large Language Models (LLMs) have been integrated into many applications (e.g., web agents) to perform more sophisticated tasks. However,...
Attack HIGH
Tailun Chen, Yu He, Yan Wang +9 more
Retrieval-Augmented Generation (RAG) systems enhance LLMs with external knowledge but introduce a critical attack surface: corpus poisoning. While...
Attack HIGH
Zafaryab Haider, Md Hafizur Rahman, Shane Moeykens +2 more
Hard-to-detect hardware bit flips, from either malicious circuitry or bugs, have already been shown to make transformers vulnerable in non-generative...
3 months ago cs.LG cs.AI
PDF
Attack HIGH
Stephan Carney, Soham Hans, Sofia Hirschmann +4 more
Adversaries (hackers) attempting to infiltrate networks frequently face uncertainty in their operational environments. This research explores the...
3 months ago cs.CR cs.HC
PDF
Attack HIGH
Xiqiao Xiong, Ouxiang Li, Zhuo Liu +5 more
Large language models have seen widespread adoption, yet they remain vulnerable to multi-turn jailbreak attacks, threatening their safe deployment....
3 months ago cs.AI cs.LG
PDF
Attack HIGH
Max Zhang, Derek Liu, Kai Zhang +2 more
Large language models (LLMs) are increasingly deployed worldwide, yet their safety alignment remains predominantly English-centric. This allows for...
Attack HIGH
Yunzhe Li, Jianan Wang, Hongzi Zhu +3 more
Large Language Models (LLMs) have become foundational components in a wide range of applications, including natural language understanding and...
3 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Richard Young
Despite substantial investment in safety alignment, the vulnerability of large language models to sophisticated multi-turn adversarial attacks...
Attack HIGH
Songping Wang, Rufan Qian, Yueming Lyu +5 more
Image-to-Video (I2V) generation synthesizes dynamic visual content from image and text inputs, providing significant creative control. However, the...
Attack HIGH
Chenyu Zhang, Yiwen Ma, Lanjun Wang +3 more
Text-to-image (T2I) models commonly incorporate defense mechanisms to prevent the generation of sensitive images. Unfortunately, recent jailbreaking...
3 months ago cs.CR cs.AI cs.CV
PDF
Attack HIGH
Shiji Zhao, Shukun Xiong, Yao Huang +7 more
Multimodal Large Language Models (MLLMs) are widely used in various fields due to their powerful cross-modal comprehension and generation...
Attack HIGH
Weikai Lu, Ziqian Zeng, Kehua Zhang +5 more
Multimodal Large Language Models (MLLMs) are increasingly vulnerable to multimodal Indirect Prompt Injection (IPI) attacks, which embed malicious...
3 months ago cs.CR cs.MM
PDF
Attack HIGH
Fan Yang
Large Language Models (LLMs) have demonstrated exceptional performance across various tasks, but their security vulnerabilities can be exploited by...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Jun Leng, Yu Liu, Litian Zhang +3 more
Large Language Models (LLMs) serve as the backbone of modern AI systems, yet they remain susceptible to adversarial jailbreak attacks. Consequently,...
Attack HIGH
Yuan Xiong, Ziqi Miao, Lijun Li +3 more
While Multimodal Large Language Models (MLLMs) show remarkable capabilities, their safety alignments are susceptible to jailbreak attacks. Existing...
3 months ago cs.CV cs.CL cs.CR
PDF