Joseph G. Zalameda, Megan A. Witherow, Alexander M. Glandon +2 more
Machine learning models trained on small data sets for security applications are especially vulnerable to adversarial attacks. Person identification...
Recent advances in the Model Context Protocol (MCP) have enabled large language models (LLMs) to invoke external tools with unprecedented ease. This...
Yasamin Medghalchi, Milad Yazdani, Amirhossein Dabiriaghdam +7 more
Ultrasound is widely used in clinical practice due to its portability, cost-effectiveness, safety, and real-time imaging capabilities. However, image...
Graph neural network (GNN) is a powerful tool for analyzing graph-structured data. However, their vulnerability to adversarial attacks raises serious...
Membership inference attacks (MIAs), which enable adversaries to determine whether specific data points were part of a model's training dataset, have...
Aravind Krishnan, Karolina Stańczak, Dietrich Klakow
As Spoken Language Models (SLMs) integrate speech and text modalities, they inherit the safety vulnerabilities of their LLM backbone and an expanded...
Hammad Atta, Ken Huang, Kyriakos Rock Lambros +11 more
Agentic LLM systems equipped with persistent memory, RAG pipelines, and external tool connectors face a class of attacks - Logic-layer Prompt Control...
Code generation large language models (LLMs) are increasingly integrated into modern software development workflows. Recent work has shown that these...