Attack HIGH
Baogang Song, Dongdong Zhao, Jianwen Xiang +2 more
Backdoor attacks pose a persistent security risk to deep neural networks (DNNs) due to their stealth and durability. While recent research has...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Tuan T. Nguyen, John Le, Thai T. Vu +2 more
Large language models (LLMs) achieve impressive performance across diverse tasks yet remain vulnerable to jailbreak attacks that bypass safety...
Survey HIGH
Francesco Giarrusso, Olga E. Sorokoletova, Vincenzo Suriani +1 more
Jailbreaking techniques pose a significant threat to the safety of Large Language Models (LLMs). Existing defenses typically focus on single-turn...
5 months ago cs.CL cs.AI
PDF
Attack HIGH
Yuqi Jia, Yupei Liu, Zedian Shao +2 more
Prompt injection attacks deceive a large language model into completing an attacker-specified task instead of its intended task by contaminating its...
5 months ago cs.CR cs.AI
PDF
Benchmark HIGH
Dongsen Zhang, Zekun Li, Xu Luo +3 more
The Model Context Protocol (MCP) standardizes how large language model (LLM) agents discover, describe, and call external tools. While MCP unlocks...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Bowen Fan, Zhilin Guo, Xunkai Li +5 more
Graph Neural Networks (GNNs) have become a pivotal framework for modeling graph-structured data, enabling a wide range of applications from social...
Attack HIGH
Xiaoxue Ren, Penghao Jiang, Kaixin Li +6 more
Web applications are prime targets for cyberattacks as gateways to critical services and sensitive data. Traditional penetration testing is costly...
5 months ago cs.CR cs.CL
PDF
Attack HIGH
Harsh Kasyap, Minghong Fang, Zhuqing Liu +2 more
Federated learning (FL) is a privacy-preserving machine learning technique that facilitates collaboration among participants across demographics. FL...
5 months ago cs.LG cs.CR
PDF
Tool HIGH
Caelin Kaplan, Alexander Warnecke, Neil Archibald
AI models are being increasingly integrated into real-world systems, raising significant concerns about their safety and security. Consequently, AI...
5 months ago cs.CR cs.AI
PDF
Tool HIGH
Zicheng Liu, Lige Huang, Jie Zhang +3 more
The increasing autonomy of Large Language Models (LLMs) necessitates a rigorous evaluation of their potential to aid in cyber offense. Existing...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Ting Li, Yang Yang, Yipeng Yu +3 more
Adversarial attacks on knowledge graph embeddings (KGE) aim to disrupt the model's ability of link prediction by removing or inserting triples. A...
5 months ago cs.CL cs.CR
PDF
Tool HIGH
Pengyu Zhu, Lijun Li, Yaxing Lyu +3 more
LLM-based multi-agent systems (MAS) demonstrate increasing integration into next-generation applications, but their safety in backdoor attacks...
Attack HIGH
Michael Schlichtkrull
When AI agents retrieve and reason over external documents, adversaries can manipulate the data they receive to subvert their behaviour. Previous...
5 months ago cs.CL cs.AI
PDF
Attack HIGH
Vasilije Stambolic, Aritra Dhar, Lukas Cavigelli
Retrieval-Augmented Generation (RAG) increases the reliability and trustworthiness of the LLM response and reduces hallucination by eliminating the...
5 months ago cs.CR cs.AI
PDF
Tool HIGH
Hyeseon An, Shinwoo Park, Suyeon Woo +1 more
The promise of LLM watermarking rests on a core assumption that a specific watermark proves authorship by a specific model. We demonstrate that this...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Zonghuan Xu, Jiayu Li, Yunhan Zhao +3 more
Vision-Language-Action (VLA) models map multimodal perception and language instructions to executable robot actions, making them particularly...
5 months ago cs.CR cs.AI cs.RO
PDF
Attack HIGH
Ming Tan, Wei Li, Hu Tao +4 more
Open-source large language models (LLMs) have demonstrated considerable dominance over proprietary LLMs in resolving neural processing tasks, thanks...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Guan-Yan Yang, Tzu-Yu Cheng, Ya-Wen Teng +2 more
The integration of Large Language Models (LLMs) into computer applications has introduced transformative capabilities but also significant security...
5 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Wentian Zhu, Zhen Xiang, Wei Niu +1 more
Unlike regular tokens derived from existing text corpora, special tokens are artificially created to annotate structured conversations during the...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Yutao Wu, Xiao Liu, Yinghui Li +5 more
Knowledge poisoning poses a critical threat to Retrieval-Augmented Generation (RAG) systems by injecting adversarial content into knowledge bases,...
5 months ago cs.CL cs.AI cs.CR
PDF
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial