Paper 2602.10481v1

Protecting Context and Prompts: Deterministic Security for Non-Deterministic AI

Large Language Model (LLM) applications are vulnerable to prompt injection and context manipulation attacks that traditional security models cannot prevent. We introduce two novel primitives--authenticated prompts and authenticated context

medium relevance benchmark
Paper 2602.08062v1

Efficient and Adaptable Detection of Malicious LLM Prompts via Bootstrap Aggregation

However, these systems remain susceptible to malicious prompts that induce unsafe or policy-violating behavior through harmful requests, jailbreak techniques, and prompt injection attacks. Existing defenses face fundamental limitations: black

medium relevance defense
Paper 2601.05755v2

VIGIL: Defending LLM Agents Against Tool Stream Injection via Verify-Before-Commit

agents operating in open environments face escalating risks from indirect prompt injection, particularly within the tool stream where manipulated metadata and runtime feedback hijack execution flow. Existing defenses encounter

high relevance tool
Paper 2603.01564v1

From Secure Agentic AI to Secure Agentic Web: Challenges, Threats, and Future Directions

Secure Agentic Web. We first summarize a component-aligned threat taxonomy covering prompt abuse, environment injection, memory attacks, toolchain abuse, model tampering, and agent network attacks. We then review defense

medium relevance survey
Paper 2510.21057v2

Soft Instruction De-escalation Defense

agentic systems that interact with an external environment; this makes them susceptible to prompt injections when dealing with untrusted data. To overcome this limitation, we propose SIC (Soft Instruction Control

medium relevance defense
Paper 2602.10498v1

When Skills Lie: Hidden-Comment Injection in LLM Agents

Skills to describe available tools and recommended procedures. We study a hidden-comment prompt injection risk in this documentation layer: when a Markdown Skill is rendered to HTML, HTML comment

high relevance attack
Paper 2512.08290v2

Systematization of Knowledge: Security and Safety in the Model Context Protocol Ecosystem

taxonomy of risks in the MCP ecosystem, distinguishing between adversarial security threats (e.g., indirect prompt injection, tool poisoning) and epistemic safety hazards (e.g., alignment failures in distributed tool delegation

medium relevance survey
Paper 2510.15994v1

MCP Security Bench (MSB): Benchmarking Attacks Against Model Context Protocol in LLM Agents

handling. MSB contributes: (1) a taxonomy of 12 attacks including name-collision, preference manipulation, prompt injections embedded in tool descriptions, out-of-scope parameter requests, user-impersonating responses, false-error

high relevance benchmark
Paper 2601.02377v1

Trust in LLM-controlled Robotics: a Survey of Security Threats, Defenses and Challenges

taxonomy of attack vectors, covering topics such as jailbreaking, backdoor attacks, and multi-modal prompt injection. In response, we analyze and categorize a range of defense mechanisms, from formal safety

medium relevance survey
Paper 2510.22628v1

Sentra-Guard: A Multilingual Human-AI Framework for Real-Time Defense Against Adversarial LLM Jailbreaks

time modular defense system named Sentra-Guard. The system detects and mitigates jailbreak and prompt injection attacks targeting large language models (LLMs). The framework uses a hybrid architecture with FAISS

high relevance tool
Paper 2601.21083v3

OpenSec: Measuring Incident Response Agent Calibration Under Adversarial Evidence

OpenSec, a dual-control reinforcement learning (RL) environment that evaluates IR agents under realistic prompt injection scenarios with execution-based scoring: time-to-first-containment (TTFC), evidence-gated action rate

medium relevance attack
Paper 2512.16962v1

MemoryGraft: Persistent Compromise of LLM Agents via Poisoned Experience Retrieval

implanting malicious successful experiences into the agent's long-term memory. Unlike traditional prompt injections that are transient, or standard RAG poisoning that targets factual knowledge, MemoryGraft exploits the agent

medium relevance benchmark
Paper 2509.23994v2

Policy-as-Prompt: Turning AI Governance Rules into Guardrails for AI Agents

integrated with a human-in-the-loop review process. Evaluations show our system reduces prompt-injection risk, blocks out-of-scope requests, and limits toxic outputs. It also generates auditable

medium relevance defense
Paper 2511.16347v1

The Shawshank Redemption of Embodied AI: Understanding and Benchmarking Indirect Environmental Jailbreaks

prompts to the embodied agent. In this paper, we propose, for the first time, indirect environmental jailbreak (IEJ), a novel attack to jailbreak embodied AI via indirect prompt injected into

high relevance benchmark
Paper 2603.03205v1

Learning When to Act or Refuse: Guarding Agentic Reasoning Models for Safe Multi-Step Tool Use

Thinking, and Phi-4, and across out-of-distribution benchmarks spanning harmful tasks, prompt injection, benign tool use, and cross-domain privacy leakage. MOSAIC reduces harmful behavior

medium relevance benchmark
Paper 2510.07809v2

Practical and Stealthy Touch-Guided Jailbreak Attacks on Deployed Mobile Vision-Language Agents

safety alignment of LVLMs. Moreover, we developed three representative Android applications and curated a prompt-injection dataset for mobile agents. We evaluated our attack across multiple LVLM backends, including closed

high relevance attack
Paper 2602.23956v2

SwitchCraft: Training-Free Multi-Event Video Generation with Attention Controls

training-free framework for multi-event video generation. Our key insight is that uniform prompt injection across time ignores the correspondence between events and frames. To this end, we introduce

medium relevance attack
Paper 2511.04694v4

Reasoning Up the Instruction Ladder for Controllable Language Models

inputs and predefined higher-priority policies, our trained model enhances robustness against jailbreak and prompt injection attacks, providing up to a 20% reduction in attack success rate (ASR). These results

medium relevance benchmark
Paper 2510.00181v2

CHAI: Command Hijacking against embodied AI

this paper, we introduce CHAI (Command Hijacking against embodied AI), a physical environment indirect prompt injection attack that exploits the multimodal language interpretation abilities of AI models. CHAI embeds deceptive

medium relevance attack
Page 8 of 15