Tool MEDIUM
Zixun Xiong, Gaoyi Wu, Lingfeng Yao +3 more
Communication topology is a critical factor in the utility and safety of LLM-based multi-agent systems (LLM-MAS), making it a high-value intellectual...
2 months ago cs.CR cs.AI
PDF
Survey HIGH
Fabrizio Dimino, Bhaskarjit Sarmah, Stefano Pasquali
The rapid adoption of large language models (LLMs) in financial services introduces new operational, regulatory, and security risks. Yet most...
2 months ago q-fin.CP cs.AI cs.CY
PDF
Defense LOW
Ali Eslami, Jiangbo Yu
This paper develops a control-theoretic framework for analyzing agentic systems embedded within feedback control loops, where an AI agent may adapt...
Tool HIGH
Yu He, Haozhe Zhu, Yiming Li +4 more
LLM agents are highly vulnerable to Indirect Prompt Injection (IPI), where adversaries embed malicious directives in untrusted tool outputs to hijack...
Tool MEDIUM
Panagiotis Georgios Pennas, Konstantinos Papaioannou, Marco Guarnieri +1 more
Large Language Models (LLMs) rely on optimizations like Automatic Prefix Caching (APC) to accelerate inference. APC works by reusing previously...
2 months ago cs.CR cs.DC cs.LG
PDF
Benchmark MEDIUM
Chuan Guo, Juan Felipe Ceron Uribe, Sicheng Zhu +10 more
Instruction hierarchy (IH) defines how LLMs prioritize system, developer, user, and tool instructions under conflict, providing a concrete,...
2 months ago cs.AI cs.CL cs.CR
PDF
Attack HIGH
Nasim Soltani, Shayan Nejadshamsi, Zakaria Abou El Houda +4 more
Adversarial examples can represent a serious threat to machine learning (ML) algorithms. If used to manipulate the behaviour of ML-based Network...
2 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Manit Baser, Alperen Yildiz, Dinil Mon Divakaran +1 more
The static knowledge representations of large language models (LLMs) inevitably become outdated or incorrect over time. While model-editing...
Tool MEDIUM
Zhengyang Shan, Jiayun Xin, Yue Zhang +1 more
Code agents powered by large language models can execute shell commands on behalf of users, introducing severe security vulnerabilities. This paper...
Attack HIGH
Scott Thornton
Retrieval-Augmented Generation (RAG) systems extend large language models (LLMs) with external knowledge sources but introduce new attack surfaces...
2 months ago cs.CR cs.AI cs.LG
PDF
Tool MEDIUM
Shriti Priya, Julian James Stephen, Arjun Natarajan
Enterprises and organizations today increasingly deploy in-house, cloud based applications and APIs for internal operations or external customers....
Attack MEDIUM
Pratyay Kumar, Miguel Antonio Guirao Aguilera, Srikathyayani Srikanteswara +2 more
Model Context Protocol (MCP) servers have rapidly emerged over the past year as a widely adopted way to enable Large Language Model (LLM) agents to...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Nanzi Yang, Weiheng Bai, Kangjie Lu
The Model Context Protocol (MCP) is a recently proposed interoperability standard that unifies how AI agents connect with external tools and data...
2 months ago cs.CR cs.AI
PDF
Tool LOW
Eeham Khan, Luis Rodriguez, Marc Queudot
Retrieval-Augmented Generation (RAG) significantly improves the factuality of Large Language Models (LLMs), yet standard pipelines often lack...
Attack HIGH
Ailiya Borjigin, Igor Stadnyk, Ben Bilski +2 more
OpenClaw-style agent stacks turn language into privileged execution: LLM intents flow through tool interception, policy gates, and a local executor....
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Fan Yang
The widespread adoption of thinking mode in large language models (LLMs) has significantly enhanced complex task processing capabilities while...
2 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Meenatchi Sundaram Muthu Selva Annamalai, Emiliano De Cristofaro, Peter Kairouz
As AI assistants become widely used, privacy-aware platforms like Anthropic's Clio have been introduced to generate insights from real-world AI use....
Attack MEDIUM
Jia Hu, Youcheng Sun, Pierre Olivier
Software compartmentalization breaks down an application into compartments isolated from each other: an attacker taking over a compartment will be...
Attack MEDIUM
Ali Raza, Gurang Gupta, Nikolay Matyunin +1 more
Warning: This article includes red-teaming experiments, which contain examples of compromised LLM responses that may be offensive or upsetting. Large...
2 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Quanchen Zou, Moyang Chen, Zonghao Ying +6 more
Large Vision-Language Models (LVLMs) undergo safety alignment to suppress harmful content. However, current defenses predominantly target explicit...
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial