Attack HIGH
Yixin Tan, Zhe Yu, Jun Sakuma
Finetuning pretrained large language models (LLMs) has become the standard paradigm for developing downstream applications. However, its security...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Safwan Shaheer, G. M. Refatul Islam, Mohammad Rafid Hamid +3 more
Prompt injection attacks can compromise the security and stability of critical systems, from infrastructure to large web applications. This work...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Peichun Hua, Hao Li, Shanghao Shi +2 more
Large Vision-Language Models (LVLMs) are vulnerable to a growing array of multimodal jailbreak attacks, necessitating defenses that are both...
5 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Jie Ma, Junqing Zhang, Guanxiong Shen +2 more
Radio frequency fingerprint identification (RFFI) is an emerging technique for the lightweight authentication of wireless Internet of Things (IoT)...
5 months ago cs.CR cs.LG
PDF
Attack HIGH
Jing Cui, Yufei Han, Jianbin Jiao +1 more
Backdoor attacks embed malicious behaviors into Large Language Models (LLMs), enabling adversaries to trigger harmful outputs or bypass safety...
5 months ago cs.CR cs.AI
PDF
Benchmark HIGH
Chaomeng Lu, Bert Lagaisse
Vulnerability detection methods based on deep learning (DL) have shown strong performance on benchmark datasets, yet their real-world effectiveness...
5 months ago cs.CR cs.LG cs.SE
PDF
Survey HIGH
Devanshu Sahoo, Manish Prasad, Vasudev Majhi +5 more
Driven by surging submission volumes, scientific peer review has catalyzed two parallel trends: individual over-reliance on LLMs and institutional...
5 months ago cs.AI cs.CL cs.CR
PDF
Benchmark HIGH
Devanshu Sahoo, Vasudev Majhi, Arjun Neekhra +3 more
The use of Large Language Models (LLMs) as automatic judges for code evaluation is becoming increasingly prevalent in academic environments. But...
5 months ago cs.SE cs.AI
PDF
Attack HIGH
Khurram Khalil, Khaza Anuarul Hoque
Generative Artificial Intelligence models, such as Large Language Models (LLMs) and Large Vision Models (LVMs), exhibit state-of-the-art performance...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Mohamed Afane, Abhishek Satyam, Ke Chen +3 more
Backdoor attacks pose significant security threats to language models by embedding hidden triggers that manipulate model behavior during inference,...
5 months ago cs.CR cs.CL
PDF
Benchmark HIGH
Futa Waseda, Shojiro Yamabe, Daiki Shiono +2 more
Large vision-language models (LVLMs) are vulnerable to typographic attacks, where misleading text within an image overrides visual understanding....
Attack HIGH
Reachal Wang, Yuqi Jia, Neil Zhenqiang Gong
Prompt injection attacks aim to contaminate the input data of an LLM to mislead it into completing an attacker-chosen task instead of the intended...
Attack HIGH
Joshua Ward, Bochao Gu, Chi-Hua Wang +1 more
Large Language Models (LLMs) have recently demonstrated remarkable performance in generating high-quality tabular synthetic data. In practice, two...
5 months ago cs.LG cs.AI
PDF
Defense HIGH
Dyna Soumhane Ouchebara, Stéphane Dupont
The significant increase in software production, driven by the acceleration of development cycles over the past two decades, has led to a steady rise...
5 months ago cs.SE cs.AI cs.CR
PDF
Attack HIGH
Yinan Zhong, Qianhao Miao, Yanjiao Chen +3 more
Large Language Models (LLMs) have been integrated into many applications (e.g., web agents) to perform more sophisticated tasks. However,...
Attack HIGH
Tailun Chen, Yu He, Yan Wang +9 more
Retrieval-Augmented Generation (RAG) systems enhance LLMs with external knowledge but introduce a critical attack surface: corpus poisoning. While...
Attack HIGH
Zafaryab Haider, Md Hafizur Rahman, Shane Moeykens +2 more
Hard-to-detect hardware bit flips, from either malicious circuitry or bugs, have already been shown to make transformers vulnerable in non-generative...
5 months ago cs.LG cs.AI
PDF
Tool HIGH
Jinghao Wang, Ping Zhang, Carter Yagemann
Medical Large Language Models (LLMs) are increasingly deployed for clinical decision support across diverse specialties, yet systematic evaluation of...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Stephan Carney, Soham Hans, Sofia Hirschmann +4 more
Adversaries (hackers) attempting to infiltrate networks frequently face uncertainty in their operational environments. This research explores the...
5 months ago cs.CR cs.HC
PDF
Attack HIGH
Xiqiao Xiong, Ouxiang Li, Zhuo Liu +5 more
Large language models have seen widespread adoption, yet they remain vulnerable to multi-turn jailbreak attacks, threatening their safe deployment....
5 months ago cs.AI cs.LG
PDF