Before the Tool Call: Deterministic Pre-Action Authorization for Autonomous AI Agents
Uchi Uchibeke
AI agents today have passwords but no permission slips. They execute tool calls (fund transfers, database queries, shell commands, sub-agent...
Vincent Siu, Jingxuan He, Kyle Montgomery +4 more
Security in LLM agents is inherently contextual. For example, the same action taken by an agent may represent legitimate behavior or a security...
Taiwo Onitiju, Iman Vakilinia
Large Language Models increasingly power critical infrastructure from healthcare to finance, yet their vulnerability to adversarial manipulation...
Zhouwei Zhai, Mengxiang Chen, Anmeng Zhang
Large language models offer transformative potential for e-commerce search by enabling intent-aware recommendations. However, their industrial...
Zhuoshang Wang, Yubing Ren, Yanan Cao +3 more
While watermarking serves as a critical mechanism for LLM provenance, existing secret-key schemes tightly couple detection with injection, requiring...
Ziling Zhou
AI agents dynamically acquire capabilities at runtime via MCP and A2A, yet no framework detects when capabilities change post-authorization. We term...
Ziling Zhou
AI agents dynamically acquire tools, orchestrate sub-agents, and transact across organizational boundaries, yet no existing security layer verifies...
Jiangrong Wu, Zitong Yao, Yuhong Nan +1 more
Tool-augmented LLM agents increasingly rely on multi-step, multi-tool workflows to complete real tasks. This design expands the attack surface,...
Frank Li
Tool-augmented LLM agents introduce security risks that extend beyond user-input filtering, including indirect prompt injection through fetched...
Zixun Xiong, Gaoyi Wu, Lingfeng Yao +3 more
Communication topology is a critical factor in the utility and safety of LLM-based multi-agent systems (LLM-MAS), making it a high-value intellectual...
Panagiotis Georgios Pennas, Konstantinos Papaioannou, Marco Guarnieri +1 more
Large Language Models (LLMs) rely on optimizations like Automatic Prefix Caching (APC) to accelerate inference. APC works by reusing previously...
Zhengyang Shan, Jiayun Xin, Yue Zhang +1 more
Code agents powered by large language models can execute shell commands on behalf of users, introducing severe security vulnerabilities. This paper...
Shriti Priya, Julian James Stephen, Arjun Natarajan
Enterprises and organizations today increasingly deploy in-house, cloud-based applications and APIs for internal operations or external customers....
Yinpeng Wu, Yitong Chen, Lixiang Wang +3 more
Device-side Large Language Models (LLMs) have witnessed explosive growth, offering higher privacy and availability compared to cloud-side LLMs....
Yuhang Huang, Boyang Ma, Biwei Yan +5 more
The Model Context Protocol (MCP) is an open and standardized interface that enables large language models (LLMs) to interact with external tools and...
Neha Nagaraja, Hayretdin Bahsi
Large Language Models (LLMs) are increasingly integrated into safety-critical workflows, yet existing security analyses remain fragmented and often...
Punyajoy Saha, Sudipta Halder, Debjyoti Mondal +1 more
Safety alignment is critical for deploying large language models (LLMs) in real-world applications, yet most existing approaches rely on large...
Arther Tian, Alex Ding, Frank Chen +2 more
Decentralized large language model (LLM) inference networks can pool heterogeneous compute to scale serving, but they require lightweight and...
Neha Nagaraja, Hayretdin Bahsi
While incorporating LLMs into systems offers significant benefits in critical application areas such as healthcare, new security challenges emerge...
Romina Omidi, Yun Dong, Binghui Wang
Google's SynthID-Text, the first production-ready generative watermarking system for large language models, designs a novel Tournament-based method...