Graph neural networks (GNNs) are powerful tools for analyzing graph-structured data. However, their vulnerability to adversarial attacks raises serious...
Membership inference attacks (MIAs), which enable adversaries to determine whether specific data points were part of a model's training dataset, have...
Aravind Krishnan, Karolina Stańczak, Dietrich Klakow
As Spoken Language Models (SLMs) integrate speech and text modalities, they inherit the safety vulnerabilities of their LLM backbone and an expanded...
Dimitris Mitropoulos, Nikolaos Alexopoulos, Georgios Alexopoulos +1 more
Security code reviews increasingly rely on systems integrating Large Language Models (LLMs), ranging from interactive assistants to autonomous agents...
Md Takrim Ul Alam, Akif Islam, Mohd Ruhul Ameen +2 more
Large language models (LLMs) deployed behind APIs and retrieval-augmented generation (RAG) stacks are vulnerable to prompt injection attacks that may...
Hammad Atta, Ken Huang, Kyriakos Rock Lambros +11 more
Agentic LLM systems equipped with persistent memory, RAG pipelines, and external tool connectors face a class of attacks - Logic-layer Prompt Control...
Code generation large language models (LLMs) are increasingly integrated into modern software development workflows. Recent work has shown that these...