Caging the Agents: A Zero Trust Security Architecture for Autonomous AI in Healthcare
Saikat Maiti
Autonomous AI agents powered by large language models are being deployed in production with capabilities including shell execution, file system...
Zichen Tang, Zirui Zhang, Qian Wang +3 more
Current Large Language Models (LLMs) are gradually exploited in practically valuable agentic workflows such as Deep Research, E-commerce...
Yi Ting Shen, Kentaroh Toyoda, Alex Leung
The Model Context Protocol (MCP) introduces a structurally distinct attack surface that existing threat frameworks, designed for traditional software...
Abhijeet Sahu, Shuva Paul, Richard Macwan
Cyber deception assists in increasing the attacker's budget in reconnaissance or any early phases of threat intrusions. In the past, numerous methods...
Patrick Levi
Retrieval augmented generation systems have become an integral part of everyday life. Whether in internet search engines, email systems, or service...
Taiwo Onitiju, Iman Vakilinia
Large Language Models increasingly power critical infrastructure from healthcare to finance, yet their vulnerability to adversarial manipulation...
Kushankur Ghosh, Mehar Klair, Kian Kyars +2 more
Provenance graphs model causal system-level interactions from logs, enabling anomaly detectors to learn normal behavior and detect deviations as...
Caglar Yildirim
Large language models (LLMs) are increasingly deployed as tool-using agents, shifting safety concerns from harmful text generation to harmful task...
Gengxin Sun, Ruihao Yu, Liangyi Yin +3 more
Ensuring robust and fair interview assessment remains a key challenge in AI-driven evaluation. This paper presents CoMAI, a general-purpose...
Zhouwei Zhai, Mengxiang Chen, Anmeng Zhang
Large language models offer transformative potential for e-commerce search by enabling intent-aware recommendations. However, their industrial...
Amira Guesmi, Muhammad Shafique
Vision-language models (VLMs) have recently shown remarkable capabilities in visual understanding and generation, but remain vulnerable to...
Ce Zhang, Jinxi He, Junyi He +2 more
Multi-modal Large Language Models (MLLMs) have achieved remarkable performance across a wide range of visual reasoning tasks, yet their vulnerability...
Zhenheng Tang, Xiang Liu, Qian Wang +3 more
As Large Language Models (LLMs) become more powerful and autonomous, they increasingly face conflicts and dilemmas in many scenarios. We first...
Vanshaj Khattar, Md Rafi ur Rashid, Moumita Choudhury +4 more
Test-time training (TTT) has recently emerged as a promising method to improve the reasoning abilities of large language models (LLMs), in which the...
Kai Wang, Biaojie Zeng, Zeming Wei +7 more
With the rapid development of LLM-based multi-agent systems (MAS), their significant safety and security concerns have emerged, which introduce novel...
Yu Pan, Wenlong Yu, Tiejun Wu +4 more
Large language models (LLMs) have demonstrated remarkable capabilities in complex reasoning tasks. However, they remain highly susceptible to...
Ye Wang, Jing Liu, Toshiaki Koike-Akino
The safety and reliability of vision-language models (VLMs) are a crucial part of deploying trustworthy agentic AI systems. However, VLMs remain...
Yuhuan Liu, Haitian Zhong, Xinyuan Xia +3 more
Large Language Models (LLMs) often suffer from catastrophic forgetting and collapse during sequential knowledge editing. This vulnerability stems...
Jinhu Qi, Yifan Li, Minghao Zhao +4 more
As agentic AI systems move beyond static question answering into open-ended, tool-augmented, and multi-step real-world workflows, their increased...