Attack HIGH
Deyue Zhang, Dongdong Yang, Junjie Mu +6 more
Multimodal large language models (MLLMs) exhibit remarkable capabilities but remain susceptible to jailbreak attacks exploiting cross-modal...
6 months ago cs.CR cs.AI
PDF
Tool HIGH
ChenYu Wu, Yi Wang, Yang Liao
Large language models (LLMs) are increasingly vulnerable to multi-turn jailbreak attacks, where adversaries iteratively elicit harmful behaviors that...
6 months ago cs.CR cs.AI
PDF
Tool HIGH
Zixuan Liu, Yi Zhao, Zhuotao Liu +4 more
Machine Learning (ML)-based malicious traffic detection is a promising security paradigm. It outperforms rule-based traditional detection by...
Benchmark HIGH
Bin Liu, Yanjie Zhao, Guoai Xu +1 more
Large language model (LLM) agents have demonstrated remarkable capabilities in software engineering and cybersecurity tasks, including code...
6 months ago cs.SE cs.CR
PDF
Attack HIGH
Evangelos Lamprou, Julian Dai, Grigoris Ntousakis +2 more
Software supply-chain attacks are an important and ongoing concern in the open source software ecosystem. These attacks maintain the standard...
Attack HIGH
Xiaoyu Xue, Yuni Lai, Chenxi Huang +4 more
The emergence of graph foundation models (GFMs), particularly those incorporating language models (LMs), has revolutionized graph learning and...
6 months ago cs.CR cs.AI
PDF
Attack HIGH
Yingguang Yang, Xianghua Zeng, Qi Wu +5 more
Social networks have become a crucial source of real-time information for individuals. The influence of social bots within these platforms has...
6 months ago cs.LG cs.AI cs.CR
PDF
Benchmark HIGH
Trilok Padhi, Pinxian Lu, Abdulkadir Erol +5 more
Large Language Model (LLM) agents are powering a growing share of interactive web applications, yet remain vulnerable to misuse and harm. Prior...
Attack HIGH
Abdulrahman Alhaidari, Balaji Palanisamy, Prashant Krishnamurthy
Billions of dollars are lost every year in DeFi platforms by transactions exploiting business logic or accounting vulnerabilities. Existing defenses...
6 months ago cs.CR cs.AI cs.DC
PDF
Attack HIGH
Wei Zou, Yupei Liu, Yanting Wang +3 more
LLM-integrated applications are vulnerable to prompt injection attacks, where an attacker contaminates the input to inject malicious instructions,...
6 months ago cs.CR cs.LG
PDF
Benchmark HIGH
Ivan Dubrovsky, Anastasia Orlova, Illarion Iov +3 more
Benchmarking outcomes increasingly govern trust, selection, and deployment of LLMs, yet these evaluations remain vulnerable to semantically...
Attack HIGH
Avihay Cohen
Large Language Model (LLM) based agents integrated into web browsers (often called agentic AI browsers) offer powerful automation of web tasks....
6 months ago cs.CR cs.AI
PDF
Attack HIGH
Baogang Song, Dongdong Zhao, Jianwen Xiang +2 more
Backdoor attacks pose a persistent security risk to deep neural networks (DNNs) due to their stealth and durability. While recent research has...
6 months ago cs.CR cs.AI
PDF
Attack HIGH
Tuan T. Nguyen, John Le, Thai T. Vu +2 more
Large language models (LLMs) achieve impressive performance across diverse tasks yet remain vulnerable to jailbreak attacks that bypass safety...
Survey HIGH
Francesco Giarrusso, Olga E. Sorokoletova, Vincenzo Suriani +1 more
Jailbreaking techniques pose a significant threat to the safety of Large Language Models (LLMs). Existing defenses typically focus on single-turn...
7 months ago cs.CL cs.AI
PDF
Attack HIGH
Yuqi Jia, Yupei Liu, Zedian Shao +2 more
Prompt injection attacks deceive a large language model into completing an attacker-specified task instead of its intended task by contaminating its...
7 months ago cs.CR cs.AI
PDF
Benchmark HIGH
Dongsen Zhang, Zekun Li, Xu Luo +3 more
The Model Context Protocol (MCP) standardizes how large language model (LLM) agents discover, describe, and call external tools. While MCP unlocks...
7 months ago cs.CR cs.AI
PDF
Attack HIGH
Bowen Fan, Zhilin Guo, Xunkai Li +5 more
Graph Neural Networks (GNNs) have become a pivotal framework for modeling graph-structured data, enabling a wide range of applications from social...
Attack HIGH
Xiaoxue Ren, Penghao Jiang, Kaixin Li +6 more
Web applications are prime targets for cyberattacks as gateways to critical services and sensitive data. Traditional penetration testing is costly...
7 months ago cs.CR cs.CL
PDF
Attack HIGH
Harsh Kasyap, Minghong Fang, Zhuqing Liu +2 more
Federated learning (FL) is a privacy-preserving machine learning technique that facilitates collaboration among participants across demographics. FL...
7 months ago cs.LG cs.CR
PDF
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial