Attack HIGH
Zafaryab Haider, Md Hafizur Rahman, Shane Moeykens +2 more
Hard-to-detect hardware bit flips, from either malicious circuitry or bugs, have already been shown to make transformers vulnerable in non-generative...
5 months ago cs.LG cs.AI
PDF
Attack HIGH
Stephan Carney, Soham Hans, Sofia Hirschmann +4 more
Adversaries (hackers) attempting to infiltrate networks frequently face uncertainty in their operational environments. This research explores the...
5 months ago cs.CR cs.HC
PDF
Attack HIGH
Xiqiao Xiong, Ouxiang Li, Zhuo Liu +5 more
Large language models have seen widespread adoption, yet they remain vulnerable to multi-turn jailbreak attacks, threatening their safe deployment....
5 months ago cs.AI cs.LG
PDF
Attack HIGH
Max Zhang, Derek Liu, Kai Zhang +2 more
Large language models (LLMs) are increasingly deployed worldwide, yet their safety alignment remains predominantly English-centric. This allows for...
Attack HIGH
Yunzhe Li, Jianan Wang, Hongzi Zhu +3 more
Large Language Models (LLMs) have become foundational components in a wide range of applications, including natural language understanding and...
5 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Richard Young
Despite substantial investment in safety alignment, the vulnerability of large language models to sophisticated multi-turn adversarial attacks...
Attack HIGH
Songping Wang, Rufan Qian, Yueming Lyu +5 more
Image-to-Video (I2V) generation synthesizes dynamic visual content from image and text inputs, providing significant creative control. However, the...
Attack HIGH
Chenyu Zhang, Yiwen Ma, Lanjun Wang +3 more
Text-to-image (T2I) models commonly incorporate defense mechanisms to prevent the generation of sensitive images. Unfortunately, recent jailbreaking...
5 months ago cs.CR cs.AI cs.CV
PDF
Attack HIGH
Shiji Zhao, Shukun Xiong, Yao Huang +7 more
Multimodal Large Language Models (MLLMs) are widely used in various fields due to their powerful cross-modal comprehension and generation...
Attack HIGH
Weikai Lu, Ziqian Zeng, Kehua Zhang +5 more
Multimodal Large Language Models (MLLMs) are increasingly vulnerable to multimodal Indirect Prompt Injection (IPI) attacks, which embed malicious...
5 months ago cs.CR cs.MM
PDF
Attack HIGH
Fan Yang
Large Language Models (LLMs) have demonstrated exceptional performance across various tasks, but their security vulnerabilities can be exploited by...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Jun Leng, Yu Liu, Litian Zhang +3 more
Large Language Models (LLMs) serve as the backbone of modern AI systems, yet they remain susceptible to adversarial jailbreak attacks. Consequently,...
Attack HIGH
Yuan Xiong, Ziqi Miao, Lijun Li +3 more
While Multimodal Large Language Models (MLLMs) show remarkable capabilities, their safety alignments are susceptible to jailbreak attacks. Existing...
5 months ago cs.CV cs.CL cs.CR
PDF
Attack HIGH
Afshin Khadangi, Hanna Marxen, Amir Sartipi +2 more
Frontier large language models (LLMs) such as ChatGPT, Grok and Gemini are increasingly used for mental-health support with anxiety, trauma and...
5 months ago cs.CY cs.AI
PDF
Attack HIGH
Ziyi Tong, Feifei Sun, Le Minh Nguyen
Large Multimodal Language Models (MLLMs) are emerging as one of the foundational tools in an expanding range of applications. Consequently,...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Yuanhe Zhang, Weiliu Wang, Zhenhong Zhou +5 more
Large Language Model (LLM)-based agents have demonstrated remarkable capabilities in reasoning, planning, and tool usage. The recently proposed Model...
5 months ago cs.CR cs.CL
PDF
Attack HIGH
Haowei Fu, Bo Ni, Han Xu +3 more
Retrieval-Augmented Generation (RAG) and Supervised Finetuning (SFT) have become the predominant paradigms for equipping Large Language Models (LLMs)...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Omar Farooq Khan Suri, John McCrae
Large Language Models (LLMs) are increasingly being deployed in real-world applications, but their flexibility exposes them to prompt injection...
5 months ago cs.CR cs.CL cs.LG
PDF
Attack HIGH
Zihao Wang, Kar Wai Fok, Vrizlynn L. L. Thing
Multi-modal large language models (MLLMs), capable of processing text, images, and audio, have been widely adopted in various AI applications....
Attack HIGH
Mintong Kang, Chong Xiang, Sanjay Kariyappa +3 more
Indirect prompt injection attacks (IPIAs), where large language models (LLMs) follow malicious instructions hidden in input data, pose a critical...
5 months ago cs.CR cs.LG
PDF