Tool MEDIUM
Rishi Jha, Harold Triedman, Justin Wagle +1 more
Control-flow hijacking attacks manipulate orchestration mechanisms in multi-agent systems into performing unsafe actions that compromise the system...
6 months ago cs.LG cs.CR eess.SY
PDF
Attack HIGH
Giulia Giusti
The concept of linearity plays a central role in both mathematics and computer science, with distinct yet complementary meanings. In mathematics,...
6 months ago cs.CR cs.LO cs.PL
PDF
Defense MEDIUM
Runlin Lei, Lu Yi, Mingguo He +4 more
While Graph Neural Networks (GNNs) and Large Language Models (LLMs) are powerful approaches for learning on Text-Attributed Graphs (TAGs), a...
Defense HIGH
Tenghui Huang, Jinbo Wen, Jiawen Kang +8 more
Smart contracts play a significant role in automating blockchain services. Nevertheless, vulnerabilities in smart contracts pose serious threats to...
6 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Elias Hossain, Swayamjit Saha, Somshubhra Roy +1 more
Even when prompts and parameters are secured, transformer language models remain vulnerable because their key-value (KV) cache during inference...
6 months ago cs.CR cs.AI
PDF
Defense MEDIUM
Qiusi Zhan, Angeline Budiman-Chan, Abdelrahman Zayed +3 more
Large language model (LLM) based search agents iteratively generate queries, retrieve external information, and reason to answer open-domain...
Attack HIGH
Masahiro Kaneko, Zeerak Talat, Timothy Baldwin
Iterative jailbreak methods that repeatedly rewrite and input prompts into large language models (LLMs) to induce harmful outputs -- using the...
Attack HIGH
Masahiro Kaneko, Timothy Baldwin
Adversarial attacks by malicious users that threaten the safety of large language models (LLMs) can be viewed as attempts to infer a target property...
6 months ago cs.CR cs.CL cs.LG
PDF
Attack HIGH
Mansi Phute, Matthew Hull, Haoran Wang +6 more
Deep learning models deployed in safety critical applications like autonomous driving use simulations to test their robustness against adversarial...
6 months ago cs.CR cs.AI cs.LG
PDF
Defense MEDIUM
Bo-Han Feng, Chien-Feng Liu, Yu-Hsuan Li Liang +9 more
Large audio-language models (LALMs) extend text-based LLMs with auditory understanding, offering new opportunities for multimodal applications. While...
6 months ago cs.SD cs.AI cs.CL
PDF
Benchmark LOW
Navreet Kaur, Hoda Ayad, Hayoung Jung +3 more
Language model users often embed personal and social context in their questions. The asker's role -- implicit in how the question is framed --...
6 months ago cs.CL cs.AI cs.CY
PDF
Tool MEDIUM
Yue Liu, Zhenchang Xing, Shidong Pan +1 more
In recent years, the AI wave has grown rapidly in software development. Even novice developers can now design and generate complex...
6 months ago cs.SE cs.CR
PDF
Attack HIGH
Amirkia Rafiei Oskooei, Mehmet S. Aktas
The proficiency of Large Language Models (LLMs) in processing structured data and adhering to syntactic rules is a capability that drives their...
6 months ago cs.CR cs.AI cs.CL
PDF
Attack MEDIUM
Jie Zhang, Meng Ding, Yang Liu +2 more
We present a novel approach for attacking black-box large language models (LLMs) by exploiting their ability to express confidence in natural...
6 months ago cs.CR cs.LG
PDF
Attack MEDIUM
Asmita Mohanty, Gezheng Kang, Lei Gao +1 more
Large Language Models (LLMs) have demonstrated strong performance across diverse tasks, but fine-tuning them typically relies on cloud-based,...
6 months ago cs.CR cs.LG
PDF
Benchmark MEDIUM
Shivam Ratnakar, Sanjay Raghavendra
Integration of Large Language Models with search/retrieval engines has become ubiquitous, yet these systems harbor a critical vulnerability that...
6 months ago cs.CL cs.AI
PDF
Attack HIGH
Alireza Heshmati, Saman Soleimani Roudi, Sajjad Amini +2 more
Existing adversarial attacks often neglect perturbation sparsity, limiting their ability to model structural changes and to explain how deep neural...
6 months ago cs.CR cs.LG eess.IV
PDF
Defense HIGH
Yiyang Huang, Liang Shi, Yitian Zhang +2 more
Large Vision-Language Models (LVLMs) excel in diverse cross-modal tasks. However, object hallucination, where models produce plausible but inaccurate...
6 months ago cs.CV cs.AI
PDF
Tool MEDIUM
Xiaofan Li, Xing Gao
The Model Context Protocol (MCP) is an emerging open standard that enables AI-powered applications to interact with external tools through structured...
6 months ago cs.CR cs.AI
PDF