Ben Kereopa-Yorke, Guillermo Diaz, Holly Wright, et al.
We define Oracle Poisoning, an attack class in which an adversary corrupts a structured knowledge graph that AI agents query at runtime via tool-use...
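As a rough illustration of the attack surface only (not the paper's construction; kg, kg_lookup, and agent_answer are made-up names), the Python sketch below shows an agent that answers by calling a knowledge-graph lookup tool at runtime, so corrupting a single stored triple silently flips the agent's output:

    # Toy triple store: (subject, relation) -> object
    kg = {
        ("drug_x", "max_daily_dose_mg"): "200",
    }

    def kg_lookup(subject: str, relation: str) -> str:
        """Tool the agent calls at runtime; it trusts the store blindly."""
        return kg.get((subject, relation), "unknown")

    def agent_answer(subject: str, relation: str) -> str:
        # In a real agent the LLM decides when to call the tool;
        # here we call it directly to keep the sketch self-contained.
        fact = kg_lookup(subject, relation)
        return f"{subject} {relation} = {fact}"

    print(agent_answer("drug_x", "max_daily_dose_mg"))  # clean graph: 200

    # Oracle Poisoning corrupts the queried store, not the prompt or the model:
    kg[("drug_x", "max_daily_dose_mg")] = "2000"
    print(agent_answer("drug_x", "max_daily_dose_mg"))  # poisoned graph: 2000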
Retrieval-Augmented Generation (RAG) systems are vulnerable to knowledge base poisoning, yet existing attacks have been evaluated almost exclusively...
Yiwei Zhang, Jeremiah Birrell, Reza Ebrahimi, et al.
Large language models (LLMs) remain vulnerable to adversarial prompting despite advances in alignment and safety, often exhibiting harmful behaviors...
Large language models (LLMs) are known to be vulnerable to jailbreak attacks, which typically rely on carefully designed prompts containing explicit...
Divyam Anshumaan, Sarthak Choudhary, Nils Palumbo, et al.
LLM agents can release private data across multi-service interactions. Existing prompt sanitizers based on metric differential privacy treat each release...
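For context, metric differential privacy (d_X-privacy) relaxes standard differential privacy to an arbitrary metric d over inputs: a sanitizing mechanism M satisfies epsilon-d_X-privacy when, for all inputs x, x' and every output z,

    \Pr[M(x) = z] \;\le\; e^{\varepsilon\, d(x, x')} \, \Pr[M(x') = z],

so prompts that are close under d must produce nearly indistinguishable sanitized releases. How this work extends that per-release guarantee across a sequence of releases to different services is not visible in the excerpt.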
Purna Sai Garigipati, Onur Ayan, Kishor Chandra Joshi, et al.
Artificial Intelligence (AI) will play an essential role in 6G. It will fundamentally reshape the network architecture itself and drive major changes...
George Fatouros, Georgios Makridis, John Soldatos, et al.
European financial institutions face mounting regulatory pressure while their security operations centres remain constrained not by data or staffing...
Mixture-of-Experts (MoE) architectures in Large Language Models (LLMs) have significantly reduced inference costs through sparse activation. However,...
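The inference saving the excerpt refers to comes from routing each token through only a few experts. A minimal NumPy sketch of top-k gating (moe_layer, gate_w, and the toy experts are illustrative, not any specific model's implementation):

    import numpy as np

    def moe_layer(x, gate_w, experts, k=2):
        """Route token vector x to the top-k of len(experts) expert networks."""
        logits = x @ gate_w                    # one router score per expert
        top = np.argsort(logits)[-k:]          # indices of the k highest scores
        w = np.exp(logits[top] - logits[top].max())
        w /= w.sum()                           # softmax over the selected k only
        # Only k experts execute, so per-token compute scales with k,
        # while total parameters scale with the number of experts.
        return sum(wi * experts[i](x) for wi, i in zip(w, top))

    rng = np.random.default_rng(0)
    d, n_experts = 8, 4
    experts = [lambda v, W=rng.normal(size=(d, d)): v @ W for _ in range(n_experts)]
    gate_w = rng.normal(size=(d, n_experts))
    print(moe_layer(rng.normal(size=d), gate_w, experts).shape)  # (8,)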