Attack HIGH
Reachal Wang, Yuqi Jia, Neil Zhenqiang Gong
Prompt injection attacks aim to contaminate the input data of an LLM to mislead it into completing an attacker-chosen task instead of the intended...
Attack HIGH
Joshua Ward, Bochao Gu, Chi-Hua Wang +1 more
Large Language Models (LLMs) have recently demonstrated remarkable performance in generating high-quality tabular synthetic data. In practice, two...
5 months ago cs.LG cs.AI
PDF
Defense HIGH
Dyna Soumhane Ouchebara, Stéphane Dupont
The significant increase in software production, driven by the acceleration of development cycles over the past two decades, has led to a steady rise...
5 months ago cs.SE cs.AI cs.CR
PDF
Attack HIGH
Yinan Zhong, Qianhao Miao, Yanjiao Chen +3 more
Large Language Models (LLMs) have been integrated into many applications (e.g., web agents) to perform more sophisticated tasks. However,...
Attack HIGH
Tailun Chen, Yu He, Yan Wang +9 more
Retrieval-Augmented Generation (RAG) systems enhance LLMs with external knowledge but introduce a critical attack surface: corpus poisoning. While...
Attack HIGH
Zafaryab Haider, Md Hafizur Rahman, Shane Moeykens +2 more
Hard-to-detect hardware bit flips, from either malicious circuitry or bugs, have already been shown to make transformers vulnerable in non-generative...
5 months ago cs.LG cs.AI
PDF
Tool HIGH
Jinghao Wang, Ping Zhang, Carter Yagemann
Medical Large Language Models (LLMs) are increasingly deployed for clinical decision support across diverse specialties, yet systematic evaluation of...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Stephan Carney, Soham Hans, Sofia Hirschmann +4 more
Adversaries (hackers) attempting to infiltrate networks frequently face uncertainty in their operational environments. This research explores the...
5 months ago cs.CR cs.HC
PDF
Attack HIGH
Xiqiao Xiong, Ouxiang Li, Zhuo Liu +5 more
Large language models have seen widespread adoption, yet they remain vulnerable to multi-turn jailbreak attacks, threatening their safe deployment....
5 months ago cs.AI cs.LG
PDF
Attack HIGH
Max Zhang, Derek Liu, Kai Zhang +2 more
Large language models (LLMs) are increasingly deployed worldwide, yet their safety alignment remains predominantly English-centric. This allows for...
Attack HIGH
Yunzhe Li, Jianan Wang, Hongzi Zhu +3 more
Large Language Models (LLMs) have become foundational components in a wide range of applications, including natural language understanding and...
5 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Richard Young
Despite substantial investment in safety alignment, the vulnerability of large language models to sophisticated multi-turn adversarial attacks...
Attack HIGH
Songping Wang, Rufan Qian, Yueming Lyu +5 more
Image-to-Video (I2V) generation synthesizes dynamic visual content from image and text inputs, providing significant creative control. However, the...
Benchmark HIGH
Xiaojun Jia, Jie Liao, Qi Guo +11 more
Recent advances in multi-modal large language models (MLLMs) have enabled unified perception-reasoning capabilities, yet these systems remain highly...
5 months ago cs.CR cs.CV
PDF
Tool HIGH
Saeid Jamshidi, Kawser Wazed Nafi, Arghavan Moradi Dakhel +3 more
The Model Context Protocol (MCP) enables Large Language Models to integrate external tools through structured descriptors, increasing autonomy in...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Chenyu Zhang, Yiwen Ma, Lanjun Wang +3 more
Text-to-image~(T2I) models commonly incorporate defense mechanisms to prevent the generation of sensitive images. Unfortunately, recent jailbreaking...
5 months ago cs.CR cs.AI cs.CV
PDF
Tool HIGH
Yuhang Huang, Junchao Li, Boyang Ma +6 more
Embodied AI systems integrate language models with real world sensing, mobility, and cloud connected mobile apps. Yet while model jailbreaks have...
5 months ago cs.CR cs.RO
PDF
Benchmark HIGH
Caleb Gross
Security research is fundamentally a problem of resource constraint and consequent prioritization. There is simply too much attack surface and too...
5 months ago cs.CR cs.IR
PDF
Attack HIGH
Shiji Zhao, Shukun Xiong, Yao Huang +7 more
Multimodal Large Language Models (MLLMs) are widely used in various fields due to their powerful cross-modal comprehension and generation...
Attack HIGH
Weikai Lu, Ziqian Zeng, Kehua Zhang +5 more
Multimodal Large Language Models (MLLMs) are increasingly vulnerable to multimodal Indirect Prompt Injection (IPI) attacks, which embed malicious...
5 months ago cs.CR cs.MM
PDF
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial