Systems Security Foundations for Agentic Computing
Abstract
Agentic artificial intelligence (AI) systems, in which agents invoke external tools such as web browsers and compilers, have become increasingly widespread in recent years. Despite their popularity, however, these systems introduce a myriad of security concerns because of their constant interaction with third-party servers: a malicious adversary can, for example, exfiltrate data or trigger other unwarranted behavior through prompt injection attacks. These concerns have recently motivated researchers to improve the safety and reliability of agentic systems, yet most of the literature on this topic approaches it from the AI standpoint and lacks a systems-security perspective and guarantees. In this work, we begin bridging this gap and present an analysis through the lens of classic cybersecurity research. Specifically, motivated by decades of progress in that domain, we identify short- and long-term research problems in agentic AI safety by examining end-to-end security properties of entire systems, rather than standalone AI models running in isolation. Our key goal is to pinpoint where research challenges arise when traditional security principles are applied in the context of AI agents; as a secondary goal, we distill these ideas for AI practitioners. Furthermore, we extensively cover 11 case studies of real-world attacks on agentic systems and define a series of new research problems specific to this important domain.