Tool HIGH
Xu Liu, Yan Chen, Kan Ling +4 more
The widespread deployment of Large Language Models (LLMs) as public-facing web services and APIs has made their security a core concern for the web...
6 months ago cs.CR cs.LG
PDF
Tool LOW
Congcong Chen, Xinyu Liu, Kaifeng Huang +2 more
Graph Neural Networks (GNNs) have made a significant impact on traffic state prediction, social recommendation, knowledge-aware question answering...
6 months ago cs.CR cs.LG
PDF
Tool HIGH
Minseok Kim, Hankook Lee, Hyungjoon Koo
Large language models (LLMs) are reshaping numerous facets of our daily lives, leading to widespread adoption as web-based services. Despite their...
6 months ago cs.CR cs.AI cs.IR
PDF
Tool LOW
Dong Chen, Yanzhe Wei, Zonglin He +7 more
Large language models (LLMs) offer transformative potential for clinical decision support in spine surgery but pose significant risks through...
6 months ago cs.LG cs.AI cs.CY
PDF
Tool HIGH
Seif Ikbarieh, Maanak Gupta, Elmahedi Mahalal
The Internet of Things has expanded rapidly, transforming communication and operations across industries but also increasing the attack surface and...
6 months ago cs.CR cs.AI
PDF
Tool MEDIUM
Ken Huang, Kyriakos Rock Lambros, Jerry Huang +8 more
This paper introduces the Agentic AI Governance Assurance & Trust Engine (AAGATE), a Kubernetes-native control plane designed to address the unique...
6 months ago cs.CR cs.AI
PDF
Tool HIGH
Md. Mehedi Hasan, Ziaur Rahman, Rafid Mostafiz +1 more
This paper presents a real-time modular defense system named Sentra-Guard. The system detects and mitigates jailbreak and prompt injection attacks...
6 months ago cs.CR cs.AI
PDF
Tool MEDIUM
Adetayo Adebimpe, Helmut Neukirchen, Thomas Welsh
Honeypots are decoy systems used for gathering valuable threat intelligence or diverting attackers away from production systems. Maximising attacker...
6 months ago cs.CR cs.CL cs.LG
PDF
Tool MEDIUM
Li An, Yujian Liu, Yepeng Liu +3 more
Watermarking has emerged as a promising solution for tracing and authenticating text generated by large language models (LLMs). A common approach to...
Tool MEDIUM
Alyssa Gerhart, Balaji Iyangar
Adversarial attacks pose a severe risk to AI systems used in healthcare, capable of misleading models into dangerous misclassifications that can...
6 months ago cs.LG cs.CR
PDF
Tool LOW
Xin Lian, Kenneth D. Forbus
Despite the broad applicability of large language models (LLMs), their reliance on probabilistic inference makes them vulnerable to errors such as...
6 months ago cs.CL cs.AI
PDF
Tool MEDIUM
Zhonghao Zhan, Amir Al Sadi, Krinos Li +1 more
In this work, we study security of Model Context Protocol (MCP) agent toolchains and their applications in smart homes. We introduce AegisMCP, a...
Tool MEDIUM
Thomas Wang, Haowen Li
As large language models (LLMs) are increasingly integrated into real-world applications, ensuring their safety, robustness, and privacy compliance...
6 months ago cs.CR cs.CL
PDF
Tool HIGH
Sidhant Narula, Javad Rafiei Asl, Mohammad Ghasemigol +2 more
Large Language Models (LLMs) remain vulnerable to multi-turn jailbreak attacks. We introduce HarmNet, a modular framework comprising ThoughtNet, a...
6 months ago cs.CR cs.AI
PDF
Tool HIGH
Zijie Xu, Minfeng Qi, Shiqing Wu +4 more
Multi-agent systems powered by large language models are advancing rapidly, yet the tension between mutual trust and security remains underexplored....
Tool HIGH
Qilin Liao, Anamika Lochab, Ruqi Zhang
Vision-Language Models (VLMs) extend large language models with visual reasoning, but their multimodal design also introduces new, underexplored...
6 months ago cs.CR cs.CL cs.CV
PDF
Tool MEDIUM
Rishi Jha, Harold Triedman, Justin Wagle +1 more
Control-flow hijacking attacks manipulate orchestration mechanisms in multi-agent systems into performing unsafe actions that compromise the system...
6 months ago cs.LG cs.CR eess.SY
PDF
Tool MEDIUM
Yue Liu, Zhenchang Xing, Shidong Pan +1 more
In recent years, the AI wave has grown rapidly in software development. Even novice developers can now design and generate complex...
6 months ago cs.SE cs.CR
PDF
Tool MEDIUM
Xiaofan Li, Xing Gao
The Model Context Protocol (MCP) is an emerging open standard that enables AI-powered applications to interact with external tools through structured...
6 months ago cs.CR cs.AI
PDF
Tool HIGH
Kate Glazko, Jennifer Mankoff
Generative AI risks such as bias and lack of representation impact people who do not interact directly with GAI systems, but whose content does:...
6 months ago cs.CR cs.CY
PDF