Paper 2511.12423v1

GRAPHTEXTACK: A Realistic Black-Box Node Injection Attack on LLM-Enhanced GNNs

Recent work integrates Large Language Models (LLMs) with Graph Neural Networks (GNNs) to jointly model semantics and structure, resulting in more general and expressive models that achieve state…

high relevance attack
Paper 2512.14448v1

Reasoning-Style Poisoning of LLM Agents via Stealthy Style Transfer: Process-Level Attacks and Runtime Monitoring in RSV Space

Large Language Model (LLM) agents relying on external retrieval are increasingly deployed in high-stakes environments. While existing adversarial attacks primarily focus on content falsification or instruction injection, we identify…

high relevance attack
Paper 2603.21642v1

Are AI-assisted Development Tools Immune to Prompt Injection?

…development tools built on the Model Context Protocol (MCP). However, their convenience comes with security risks, especially prompt-injection attacks delivered via tool-poisoning vectors. While prior research has studied…

high relevance tool
Paper 2510.13842v1

ADMIT: Few-shot Knowledge Poisoning Attacks on RAG-based Fact Checking

Knowledge poisoning poses a critical threat to Retrieval-Augmented Generation (RAG) systems by injecting adversarial content into knowledge bases, tricking Large Language Models (LLMs) into producing attacker-controlled outputs grounded…

high relevance attack
Paper 2510.12143v1

Fairness-Constrained Optimization Attack in Federated Learning

…demographics. FL enables model sharing while restricting the movement of data. Since FL provides participants with independence over their training data, it becomes susceptible to poisoning attacks. Such collaboration also…

high relevance attack
Paper 2602.22427v2

Adversarial Hubness Detector: Detecting Hubness Poisoning in Retrieval-Augmented Generation Systems

Retrieval-Augmented Generation (RAG) systems are essential to contemporary AI…

medium relevance attack
Paper 2511.14074v1

Dynamic Black-box Backdoor Attacks on IoT Sensory Data

…measurements can be fed to a machine learning-based model to train and classify human activities. While deep learning-based models have proven successful in classifying human activity and gestures…

high relevance attack
Paper 2601.05293v1

A Survey of Agentic AI and Cybersecurity: Challenges, Opportunities and Use-case Prototypes

…survey emerging threat models, security frameworks, and evaluation pipelines tailored to agentic systems, and analyze systemic risks including agent collusion, cascading failures, oversight evasion, and memory poisoning. Finally, we present…

medium relevance survey
Paper 2509.26584v1

Fairness Testing in Retrieval-Augmented Generation: How Small Perturbations Reveal Bias in Small Language Models

Large Language Models (LLMs) are widely used across multiple domains but continue to raise concerns regarding security and fairness. Beyond known attack vectors such as data poisoning and prompt injection…

medium relevance benchmark
Paper 2603.00172v1

Hidden in the Metadata: Stealth Poisoning Attacks on Multimodal Retrieval-Augmented Generation

…augmented generation (RAG) has emerged as a powerful paradigm for enhancing multimodal large language models by grounding their responses in external, factual knowledge and thus mitigating hallucinations. However, the integration…

high relevance attack
Paper 2511.17671v1

MURMUR: Using cross-user chatter to break collaborative language agents in groups

…today's language models lack a mechanism for isolating user interactions and concurrent tasks, creating a new attack vector inherent to this new setting: cross-user poisoning…

medium relevance attack
Paper 2603.20357v1

Memory poisoning and secure multi-agent systems

Memory poisoning attacks on Agentic AI and multi-agent systems (MAS) have recently caught attention, partly because Large Language Models (LLMs) facilitate the construction…

medium relevance attack
Paper 2509.24408v2

FuncPoison: Poisoning Function Library to Hijack Multi-agent Autonomous Driving Systems

Autonomous driving systems increasingly rely on multi-agent architectures powered by large language models (LLMs), where specialized agents collaborate to perceive, reason, and plan. A key component of these systems…

medium relevance attack
Paper 2510.00586v2

Eyes-on-Me: Scalable RAG Poisoning through Transferable Attention-Steering Attractors

…data poisoning and show that modular, reusable components pose a practical threat to modern AI systems. They also reveal a strong link between attention concentration and model outputs, informing interpretability…

medium relevance attack
Paper 2601.13112v1

CODE: A Contradiction-Based Deliberation Extension Framework for Overthinking Attacks on Retrieval-Augmented Generation

…multi-step self-verification. However, recent studies have shown that reasoning models suffer from overthinking attacks, in which models are tricked into generating an unnecessarily large number of reasoning tokens. In this…

high relevance attack
Paper 2512.13207v2

Evaluating Adversarial Attacks on Federated Learning for Temperature Forecasting

…high-resolution spatiotemporal forecasts that can surpass traditional numerical models, while FL allows institutions in different locations to collaboratively train models without sharing raw data, addressing efficiency and security concerns…

high relevance attack
Paper 2601.14054v2

SecureSplit: Mitigating Backdoor Attacks in Split Learning

…trained model. To address this vulnerability, we introduce SecureSplit, a defense mechanism tailored to SL. SecureSplit applies a dimensionality transformation strategy to accentuate subtle differences between benign and poisoned embeddings…

high relevance attack
Paper 2601.15474v1

Multi-Targeted Graph Backdoor Attack

…based attack. Our analysis of four GNN models confirms the generalization capability of our attack, which is effective regardless of GNN model architecture and training-parameter settings. We further…

high relevance attack
Paper 2602.08446v1

RIFLE: Robust Distillation-based FL for Deep Model Deployment on Resource-Constrained IoT Networks

…TinyML models, collaboratively train global models by sharing gradients with a central server while preserving data privacy. However, as data heterogeneity and task complexity increase, TinyML models often become insufficient…

medium relevance benchmark
Paper 2511.01268v1

Rescuing the Unpoisoned: Efficient Defense against Knowledge Corruption Attacks on RAG Systems

…poisoning) attacks in practical RAG deployments. RAGDefender operates during the post-retrieval phase, leveraging lightweight machine learning techniques to detect and filter out adversarial content without requiring additional model training…

high relevance tool
Page 7 of 10