Attack HIGH
Oleg Brodt, Elad Feldman, Bruce Schneier +1 more
Prompt injection was initially framed as the large language model (LLM) analogue of SQL injection. However, over the past three years, attacks...
4 months ago cs.CR cs.AI
PDF
Attack HIGH
Zhiyi Mou, Jingyuan Yang, Zeheng Qian +6 more
While Large Language Models (LLMs) have powerful capabilities, they remain vulnerable to jailbreak attacks, which is a critical barrier to their safe...
Attack HIGH
Xiaonan Liu, Zhihao Li, Xiao Lan +3 more
Capture-the-Flag (CTF) competitions play a central role in modern cybersecurity as a platform for training practitioners and evaluating offensive and...
Attack HIGH
Fengchao Chen, Tingmin Wu, Van Nguyen +1 more
Large Language Models (LLMs) have enabled agents to move beyond conversation toward end-to-end task execution and become more helpful. However, this...
Benchmark HIGH
Shaznin Sultana, Sadia Afreen, Nasir U. Eisty
Context: Traditional software security analysis methods struggle to keep pace with the scale and complexity of modern codebases, requiring...
Attack HIGH
Mohammed Himayath Ali, Mohammed Aqib Abdullah, Mohammed Mudassir Uddin +1 more
Large Language Models have emerged as transformative tools for Security Operations Centers, enabling automated log analysis, phishing triage, and...
4 months ago cs.CR cs.CV
PDF
Attack HIGH
Xinyi Wu, Geng Hong, Yueyue Chen +5 more
Web agents, powered by large language models (LLMs), are increasingly deployed to automate complex web interactions. The rise of open-source...
4 months ago cs.CR cs.AI
PDF
Attack HIGH
Shawn Li, Chenxiao Yu, Zhiyu Ni +4 more
Large language models (LLMs) are increasingly deployed in security-sensitive applications, where they must follow system- or developer-specified...
4 months ago cs.CR cs.AI
PDF
Tool HIGH
Hongyan Chang, Ergute Bao, Xinjian Luo +1 more
Large language models (LLMs) increasingly rely on retrieving information from external corpora. This creates a new attack surface: indirect prompt...
4 months ago cs.CR cs.AI
PDF
Tool HIGH
Harshil Parmar, Pushti Vyas, Prayers Khristi +1 more
As vulnerability research increasingly adopts generative AI, a critical reliance on opaque model outputs has emerged, creating a "trust gap" in...
4 months ago cs.CR cs.AI cs.SE
PDF
Survey HIGH
Masahiro Kaneko
The use of large language models (LLMs) in peer review systems has attracted growing attention, making it essential to examine their potential...
4 months ago cs.CL cs.AI cs.LG
PDF
Attack HIGH
Muhammad Wahid Akram, Keshav Sood, Muneeb Ul Hassan +1 more
Phishing with Quick Response (QR) codes is termed as Quishing. The attackers exploit this method to manipulate individuals into revealing their...
4 months ago cs.CR cs.LG
PDF
Attack HIGH
Quan Minh Nguyen, Min-Seon Kim, Hoang M. Ngo +3 more
Membership inference attack (MIA) poses a significant privacy threat in federated learning (FL) as it allows adversaries to determine whether a...
4 months ago cs.LG cs.CR
PDF
Attack HIGH
Hongjun An, Yiliang Song, Jiangan Chen +3 more
Large Language Model (LLM) training often optimizes for preference alignment, rewarding outputs that are perceived as helpful and...
4 months ago cs.CR cs.AI
PDF
Attack HIGH
Víctor Mayoral-Vilches, María Sanz-Gómez, Francesco Balassone +6 more
AI-driven penetration testing now executes thousands of actions per hour but still lacks the strategic intuition humans apply in competitive...
Tool HIGH
Junda Lin, Zhaomeng Zhou, Zhi Zheng +4 more
LLM agents operating in open environments face escalating risks from indirect prompt injection, particularly within the tool stream where manipulated...
4 months ago cs.CR cs.AI
PDF
Attack HIGH
Ahmad Alobaid, Martí Jordà Roca, Carlos Castillo +1 more
The availability of Large Language Models (LLMs) has led to a new generation of powerful chatbots that can be developed at relatively low cost. As...
4 months ago cs.CR cs.AI
PDF
Tool HIGH
Jingxiao Yang, Ping He, Tianyu Du +2 more
Recent advances in software vulnerability detection have been driven by Language Model (LM)-based approaches. However, these models remain vulnerable...
4 months ago cs.CR cs.AI
PDF
Attack HIGH
Balachandra Devarangadi Sunil, Isheeta Sinha, Piyush Maheshwari +3 more
Large language model agents equipped with persistent memory are vulnerable to memory poisoning attacks, where adversaries inject malicious...
4 months ago cs.CR cs.MA
PDF
Tool HIGH
Zhaoqi Wang, Zijian Zhang, Daqing He +5 more
Large language models (LLMs) have demonstrated remarkable capabilities across diverse applications, however, they remain critically vulnerable to...
4 months ago cs.CR cs.AI
PDF
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial