Attack HIGH
Haobo Wang, Weiqi Luo, Xiaojun Jia +1 more
Large vision-language models (VLMs) are vulnerable to transfer-based adversarial perturbations, enabling attackers to optimize on surrogate models...
Attack HIGH
Xiaoyu Wen, Zhida He, Han Qi +7 more
Ensuring robust safety alignment is crucial for Large Language Models (LLMs), yet existing defenses often lag behind evolving adversarial attacks due...
1 month ago cs.AI cs.CL cs.LG
PDF
Attack MEDIUM
Poushali Sengupta, Shashi Raj Pandey, Sabita Maharjan +1 more
Large language models (LLMs) generate outputs by utilizing extensive context, which often includes redundant information from prompts, retrieved...
1 month ago cs.CL cs.AI stat.ML
PDF
Attack MEDIUM
Eliron Rahimi, Elad Hirshel, Rom Himelstein +3 more
Diffusion language models (DLMs) have recently emerged as a promising alternative to autoregressive (AR) models, offering parallel decoding and...
1 month ago cs.LG cs.AI
PDF
Attack HIGH
Ziyue Wang, Jiangshan Yu, Kaihua Qin +3 more
Decentralized Finance (DeFi) has turned blockchains into financial infrastructure, allowing anyone to trade, lend, and build protocols without...
1 month ago cs.CR cs.AI
PDF
Attack HIGH
Terry Yue Zhuo, Yangruibo Ding, Wenbo Guo +1 more
For over a decade, cybersecurity has relied on human labor scarcity to limit attackers to high-value targets manually or generic automated attacks at...
1 month ago cs.CR cs.AI cs.CY
PDF
Attack MEDIUM
Xinyi Hou, Shenao Wang, Yifan Zhang +4 more
Agentic AI systems built around large language models (LLMs) are moving away from closed, single-model frameworks and toward open ecosystems that...
Attack HIGH
Kaiyuan Cui, Yige Li, Yutao Wu +4 more
Vision-language models (VLMs) extend large language models (LLMs) with vision encoders, enabling text generation conditioned on both images and text....
1 month ago cs.LG cs.AI cs.CV
PDF
Attack HIGH
Xueyi Li, Zhuoneng Zhou, Zitao Liu +2 more
Large language models (LLMs) have demonstrated remarkable potential for automatic short answer grading (ASAG), significantly boosting student...
1 month ago cs.CR cs.AI cs.CL
PDF
Attack MEDIUM
Manveer Singh Tamber, Hosna Oyarhoseini, Jimmy Lin
Research on adversarial robustness in language models is currently fragmented across applications and attacks, obscuring shared vulnerabilities. In...
1 month ago cs.CL cs.IR
PDF
Attack HIGH
Licheng Pan, Yunsheng Lu, Jiexi Liu +5 more
Uncovering the mechanisms behind "jailbreaks" in large language models (LLMs) is crucial for enhancing their safety and reliability, yet these...
1 month ago cs.LG cs.AI cs.CR
PDF
Attack HIGH
Md Jahedur Rahman, Ihsen Alouani
Large language models (LLMs) are increasingly used in interactive and retrieval-augmented systems, but they remain vulnerable to task drift;...
1 month ago cs.CR cs.AI
PDF
Attack HIGH
Yuxuan Lu, Yongkang Guo, Yuqing Kong
Safety alignment in Large Language Models (LLMs) often creates a systematic discrepancy between a model's aligned output and the underlying...
1 month ago cs.CL cs.AI cs.CR
PDF
Attack HIGH
Yihang Chen, Zhao Xu, Youyuan Jiang +2 more
Large Vision-Language Models (LVLMs) are increasingly equipped with robust safety safeguards to prevent responses to harmful or disallowed prompts....
1 month ago cs.CV cs.AI cs.CR
PDF
Attack HIGH
Jiate Li, Defu Cao, Li Li +8 more
Large language models (LLMs) have been serving as effective backbones for retrieval systems, including Retrieval-Augmentation-Generation (RAG), Dense...
Attack HIGH
Kunal Mukherjee, Zulfikar Alom, Tran Gia Bao Ngo +2 more
The rise of bot accounts on social media poses significant risks to public discourse. To address this threat, modern bot detectors increasingly rely...
1 month ago cs.LG cs.AI cs.CR
PDF
Attack HIGH
Ye Yu, Haibo Jin, Yaoning Yu +2 more
Large audio-language models increasingly operate on raw speech inputs, enabling more seamless integration across domains such as voice assistants,...
1 month ago cs.CL cs.AI cs.CR
PDF
Attack MEDIUM
Haitham S. Al-Sinani, Chris J. Mitchell
Wireless ethical hacking relies heavily on skilled practitioners manually interpreting reconnaissance results and executing complex, time-sensitive...
1 month ago cs.CR cs.AI
PDF
Attack HIGH
Zhixiang Zhang, Zesen Liu, Yuchong Xie +2 more
Semantic caching has emerged as a pivotal technique for scaling LLM applications, widely adopted by major providers including AWS and Microsoft. By...
1 month ago cs.CR cs.AI
PDF
Attack LOW
Yilong Huang, Songze Li
Diffusion-based face swapping achieves state-of-the-art performance, yet it also exacerbates the potential harm of malicious face swapping to violate...
1 month ago cs.CV cs.CR cs.LG
PDF