Defense HIGH
Jonah Ghebremichael, Saastha Vasan, Saad Ullah +6 more
Static Application Security Testing (SAST) tools using taint analysis are widely viewed as providing higher-quality vulnerability detection results...
2 months ago cs.CR cs.SE
PDF
Defense HIGH
Hao Wang, Yanting Wang, Hao Li +2 more
Large Language Models (LLMs) have achieved remarkable capabilities but remain vulnerable to adversarial ``jailbreak'' attacks designed to bypass...
2 months ago cs.CR cs.CL
PDF
Attack HIGH
Yinzhi Zhao, Ming Wang, Shi Feng +3 more
Large language models (LLMs) have achieved impressive performance across natural language tasks and are increasingly deployed in real-world...
2 months ago cs.AI cs.CL
PDF
Attack HIGH
Yuansen Liu, Yixuan Tang, Anthony Kum Hoe Tun
Current LLM safety research predominantly focuses on mitigating Goal Hijacking, preventing attackers from redirecting a model's high-level objective...
Attack HIGH
Hao Li, Yankai Yang, G. Edward Suh +2 more
Large Language Models (LLMs) have enabled the development of powerful agentic systems capable of automating complex workflows across various fields....
2 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Oleg Brodt, Elad Feldman, Bruce Schneier +1 more
Prompt injection was initially framed as the large language model (LLM) analogue of SQL injection. However, over the past three years, attacks...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Zhiyi Mou, Jingyuan Yang, Zeheng Qian +6 more
While Large Language Models (LLMs) have powerful capabilities, they remain vulnerable to jailbreak attacks, which is a critical barrier to their safe...
Attack HIGH
Xiaonan Liu, Zhihao Li, Xiao Lan +3 more
Capture-the-Flag (CTF) competitions play a central role in modern cybersecurity as a platform for training practitioners and evaluating offensive and...
Attack HIGH
Fengchao Chen, Tingmin Wu, Van Nguyen +1 more
Large Language Models (LLMs) have enabled agents to move beyond conversation toward end-to-end task execution and become more helpful. However, this...
Benchmark HIGH
Shaznin Sultana, Sadia Afreen, Nasir U. Eisty
Context: Traditional software security analysis methods struggle to keep pace with the scale and complexity of modern codebases, requiring...
Attack HIGH
Mohammed Himayath Ali, Mohammed Aqib Abdullah, Mohammed Mudassir Uddin +1 more
Large Language Models have emerged as transformative tools for Security Operations Centers, enabling automated log analysis, phishing triage, and...
2 months ago cs.CR cs.CV
PDF
Attack HIGH
Xinyi Wu, Geng Hong, Yueyue Chen +5 more
Web agents, powered by large language models (LLMs), are increasingly deployed to automate complex web interactions. The rise of open-source...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Shawn Li, Chenxiao Yu, Zhiyu Ni +4 more
Large language models (LLMs) are increasingly deployed in security-sensitive applications, where they must follow system- or developer-specified...
2 months ago cs.CR cs.AI
PDF
Tool HIGH
Hongyan Chang, Ergute Bao, Xinjian Luo +1 more
Large language models (LLMs) increasingly rely on retrieving information from external corpora. This creates a new attack surface: indirect prompt...
2 months ago cs.CR cs.AI
PDF
Tool HIGH
Harshil Parmar, Pushti Vyas, Prayers Khristi +1 more
As vulnerability research increasingly adopts generative AI, a critical reliance on opaque model outputs has emerged, creating a "trust gap" in...
2 months ago cs.CR cs.AI cs.SE
PDF
Survey HIGH
Masahiro Kaneko
The use of large language models (LLMs) in peer review systems has attracted growing attention, making it essential to examine their potential...
2 months ago cs.CL cs.AI cs.LG
PDF
Attack HIGH
Muhammad Wahid Akram, Keshav Sood, Muneeb Ul Hassan +1 more
Phishing with Quick Response (QR) codes is termed as Quishing. The attackers exploit this method to manipulate individuals into revealing their...
2 months ago cs.CR cs.LG
PDF
Attack HIGH
Quan Minh Nguyen, Min-Seon Kim, Hoang M. Ngo +3 more
Membership inference attack (MIA) poses a significant privacy threat in federated learning (FL) as it allows adversaries to determine whether a...
2 months ago cs.LG cs.CR
PDF
Attack HIGH
Hongjun An, Yiliang Song, Jiangan Chen +3 more
Large Language Model (LLM) training often optimizes for preference alignment, rewarding outputs that are perceived as helpful and...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Víctor Mayoral-Vilches, María Sanz-Gómez, Francesco Balassone +6 more
AI-driven penetration testing now executes thousands of actions per hour but still lacks the strategic intuition humans apply in competitive...
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial