Paper 2510.23675v3

QueryIPI: Query-agnostic Indirect Prompt Injection on Coding Agents

high-privilege system access, creating a high-stakes attack surface. Prior work on Indirect Prompt Injection (IPI) is mainly query-specific, requiring particular user queries as triggers and leading

high relevance attack
Paper 2602.18514v1

Trojan Horses in Recruiting: A Red-Teaming Case Study on Indirect Prompt Injection in Standard vs. Reasoning Models

automated decision-making pipelines, specifically within Human Resources (HR), the security implications of Indirect Prompt Injection (IPI) become critical. While a prevailing hypothesis posits that "Reasoning" or "Chain-of-Thought

high relevance attack
Paper 2603.15417v1

Amplification Effects in Test-Time Reinforcement Learning: Safety and Reasoning Vulnerabilities

labels. However, this reliance on test data also makes TTT methods vulnerable to harmful prompt injections. In this paper, we investigate safety vulnerabilities of TTT methods, where we study

medium relevance defense
Paper 2602.22450v1

Silent Egress: When Implicit Prompt Injection Makes LLM Agents Leak Without a Trace

URLs and calling external tools. We show that this workflow gives rise to implicit prompt injection: adversarial instructions embedded in automatically generated URL previews, including titles, metadata, and snippets

high relevance attack
Paper 2603.15714v1

How Vulnerable Are AI Agents to Indirect Prompt Injections? Insights from a Large-Scale Public Competition

data sources such as emails, documents, and code repositories. This creates exposure to indirect prompt injection attacks, where adversarial instructions embedded in external content manipulate agent behavior without user awareness

high relevance attack
Paper 2510.03204v1

FocusAgent: Simple Yet Effective Ways of Trimming the Large Context of Web Agents

computational cost of processing; moreover, processing full pages exposes agents to security risks such as prompt injection. Existing pruning strategies either discard relevant content or retain irrelevant context, leading to suboptimal

medium relevance benchmark
Paper 2601.10923v2

Hidden-in-Plain-Text: A Benchmark for Social-Web Indirect Prompt Injection in RAG

amplifying both their usefulness and their attack surface. Most notably, indirect prompt injection and retrieval poisoning attack the web-native carriers that survive ingestion pipelines and are very concerning

high relevance benchmark
Paper 2602.20720v1

AdapTools: Adaptive Tool-based Indirect Prompt Injection Attacks on Agentic LLMs

powerful for complex task execution. However, this advancement introduces critical security vulnerabilities, particularly indirect prompt injection (IPI) attacks. Existing attack methods are limited by their reliance on static patterns

high relevance tool
Paper 2602.03117v2

AgentDyn: A Dynamic Open-Ended Benchmark for Evaluating Prompt Injection Attacks of Real-World Agent Security System

However, the external data that the agent consumes also introduces the risk of indirect prompt injection attacks, where malicious instructions embedded in third-party content hijack agent behavior. Guided

high relevance benchmark
Paper 2512.20986v1

AegisAgent: An Autonomous Defense Agent Against Prompt Injection Attacks in LLM-HARs

understanding. However, the reliability of these systems is critically undermined by their vulnerability to prompt injection attacks, where attackers deliberately input deceptive instructions into LLMs. Traditional defenses, based on static

high relevance attack
Paper 2510.00451v1

A Call to Action for a Secure-by-Design Generative AI Paradigm

Large language models have gained widespread prominence, yet their vulnerability to prompt injection and other adversarial attacks remains a critical concern. This paper argues for a security-by-design

medium relevance attack
Paper 2512.23128v1

It's a TRAP! Task-Redirecting Agent Persuasion Benchmark for Web Agents

professional networking. Their reliance on dynamic web content, however, makes them vulnerable to prompt injection attacks: adversarial instructions hidden in interface elements that persuade the agent to divert from

medium relevance benchmark
Paper 2602.05484v1

Clouding the Mirror: Stealthy Prompt Injection Attacks Targeting LLM-based Phishing Detection

phishing site. While these approaches are promising, LLMs are inherently vulnerable to prompt injection (PI). Because attackers can fully control various elements of phishing sites, this creates the potential

high relevance attack
Paper 2512.23557v1

Toward Trustworthy Agentic AI: A Multimodal Framework for Preventing Prompt Injection Attacks

GraphChain. Nevertheless, this agentic environment increases the likelihood of multimodal prompt injection (PI) attacks, in which concealed or malicious instructions carried in text, pictures, metadata, or agent

high relevance tool
Paper 2510.09462v2

Adaptive Attacks on Trusted Monitors Subvert AI Control Protocols

simple adaptive attack vector by which the attacker embeds publicly known or zero-shot prompt injections in the model outputs. Using this tactic, frontier models consistently evade diverse monitors

high relevance attack
Paper 2602.07398v1

AgentSys: Secure and Dynamic LLM Agents Through Explicit Hierarchical Memory Management

Indirect prompt injection threatens LLM agents by embedding malicious instructions in external content, enabling unauthorized actions and data theft. LLM agents maintain working memory through their context window, which stores

medium relevance attack
Paper 2602.05066v2

Bypassing AI Control Protocols via Agent-as-a-Proxy Attacks

agents automate critical workloads, they remain vulnerable to indirect prompt injection (IPI) attacks. Current defenses rely on monitoring protocols that jointly evaluate an agent's Chain-of-Thought

high relevance attack
Paper 2603.07191v2

Governance Architecture for Autonomous Agent Systems: Threats, Framework, and Engineering Practice

Autonomous agents powered by large language models introduce a class of execution-layer vulnerabilities -- prompt injection, retrieval poisoning, and uncontrolled tool invocation -- that existing guardrails fail to address systematically

medium relevance benchmark
Paper 2602.11416v1

Optimizing Agent Planning for Security and Autonomy

Indirect prompt injection attacks threaten AI agents that execute consequential actions, motivating deterministic system-level defenses. Such defenses can provably block unsafe actions by enforcing confidentiality and integrity policies

medium relevance benchmark
Paper 2601.17549v1

Breaking the Protocol: Security Analysis of the Model Context Protocol Specification and Prompt Injection Vulnerabilities in Tool-Integrated LLM Agents

servers to claim arbitrary permissions, (2) bidirectional sampling without origin authentication enabling server-side prompt injection, and (3) implicit trust propagation in multi-server configurations. We implement MCPBench

high relevance tool