ILION: Deterministic Pre-Execution Safety Gates for Agentic AI Systems
Florin Adrian Chitan
The proliferation of autonomous AI agents capable of executing real-world actions — filesystem operations, API calls, database modifications,...
Emmanuel Bamidele
Long-running LLM agents require persistent memory to preserve state across interactions, yet most deployed systems manage memory with age-based...
Phan The Duy, Nghi Hoang Khoa, Nguyen Tran Anh Quan +3 more
The increasing deployment of Federated Learning (FL) in Intrusion Detection Systems (IDS) introduces new challenges related to data privacy,...
Leon Staufer, Kevin Feng, Kevin Wei +6 more
Agentic AI systems are increasingly capable of performing professional and personal tasks with limited human involvement. However, tracking these...
Arnold Cartagena, Ariane Teixeira
Large language models deployed as agents increasingly interact with external systems through tool calls--actions with real-world consequences that...
Doron Shavit
Jailbreak prompts are a practical and evolving threat to large language models (LLMs), particularly in agentic systems that execute tools over...
Yuepeng Hu, Yuqi Jia, Mengyuan Li +2 more
In a malicious tool attack, an attacker uploads a malicious tool to a distribution platform; once a user installs the tool and the LLM agent selects...
Hayfa Dhabhi, Kashyap Thimmaraju
Large Language Models (LLMs) deploy safety mechanisms to prevent harmful outputs, yet these defenses remain vulnerable to adversarial prompts. While...
Herman Errico
As artificial intelligence systems evolve from passive assistants into autonomous agents capable of executing consequential actions, the security...
Xiaoxu Peng, Dong Zhou, Jianwen Zhang +3 more
Vision Language Models (VLMs) have advanced perception in autonomous driving (AD), but they remain vulnerable to adversarial threats. These risks...
Tianyi Wang, Huawei Fan, Yuanchao Shu +2 more
Large Language Models face an emerging and critical threat known as latency attacks. Because LLM inference is inherently expensive, even modest...
Juefei Pu, Xingyu Li, Zhengchuan Liang +5 more
Autonomous large language model (LLM) based systems have recently shown promising results across a range of cybersecurity tasks. However, there is no...
Saad Hossain, Tom Tseng, Punya Syon Pandey +8 more
As increasingly capable open-weight large language models (LLMs) are deployed, improving their tamper resistance against unsafe modifications,...
Guowei Guan, Yurong Hao, Jiaming Zhang +6 more
Multimodal large language models (MLLMs) are pushing recommender systems (RecSys) toward content-grounded retrieval and ranking via cross-modal...
Guangwei Zhang, Jianing Zhu, Cheng Qian +12 more
We present Copyright Detective, the first interactive forensic system for detecting, analyzing, and visualizing potential copyright risks in LLM...
Gautam Savaliya, Robert Aufschläger, Abhishek Subedi +2 more
Artificial intelligence systems introduce complex privacy risks throughout their lifecycle, especially when processing sensitive or high-dimensional...
Jiaqi Gao, Zijian Zhang, Yuqiang Sun +5 more
Business logic vulnerabilities have become one of the most damaging yet least understood classes of smart contract vulnerabilities. Unlike...
Alsharif Abuadbba, Nazatul Sultan, Surya Nepal +1 more
AI is moving from domain-specific autonomy in closed, predictable settings to large-language-model-driven agents that plan and act in open,...
Zehua Cheng, Jianwei Yang, Wei Dai +1 more
Large Language Models (LLMs) remain vulnerable to adaptive jailbreaks that easily bypass empirical defenses like GCG. We propose a framework for...
Weizhe Tang, Junwei You, Jiaxi Liu +5 more
End-to-end autonomous driving models increasingly benefit from large vision--language models for semantic understanding, yet ensuring safe and...