Paper 2511.16347v1

The Shawshank Redemption of Embodied AI: Understanding and Benchmarking Indirect Environmental Jailbreaks

prompts to the embodied agent. In this paper, we propose, for the first time, indirect environmental jailbreak (IEJ), a novel attack that jailbreaks embodied AI via indirect prompts injected into

high relevance benchmark
Paper 2603.03205v1

Learning When to Act or Refuse: Guarding Agentic Reasoning Models for Safe Multi-Step Tool Use

Thinking, and Phi-4, and across out-of-distribution benchmarks spanning harmful tasks, prompt injection, benign tool use, and cross-domain privacy leakage. MOSAIC reduces harmful behavior

medium relevance benchmark
Paper 2510.07809v2

Practical and Stealthy Touch-Guided Jailbreak Attacks on Deployed Mobile Vision-Language Agents

safety alignment of LVLMs. Moreover, we developed three representative Android applications and curated a prompt-injection dataset for mobile agents. We evaluated our attack across multiple LVLM backends, including closed

high relevance attack
Paper 2602.23956v2

SwitchCraft: Training-Free Multi-Event Video Generation with Attention Controls

training-free framework for multi-event video generation. Our key insight is that uniform prompt injection across time ignores the correspondence between events and frames. To this end, we introduce

medium relevance attack
Paper 2511.04694v4

Reasoning Up the Instruction Ladder for Controllable Language Models

inputs and predefined higher-priority policies, our trained model enhances robustness against jailbreak and prompt injection attacks, providing up to a 20% reduction in attack success rate (ASR). These results

medium relevance benchmark
Paper 2510.00181v2

CHAI: Command Hijacking against embodied AI

this paper, we introduce CHAI (Command Hijacking against embodied AI), a physical environment indirect prompt injection attack that exploits the multimodal language interpretation abilities of AI models. CHAI embeds deceptive

medium relevance attack
Paper 2603.13847v1

Sirens' Whisper: Inaudible Near-Ultrasonic Jailbreaks of Speech-Driven LLMs

case study, the underlying covert acoustic channel enables a broader class of high-fidelity prompt-injection and command-execution attacks

high relevance attack
Paper 2510.16794v1

Black-box Optimization of LLM Outputs by Asking for Directions

general method to three attack scenarios: adversarial examples for vision-LLMs, jailbreaks, and prompt injections. Our attacks successfully generate malicious inputs against systems that only expose textual outputs, thereby dramatically

medium relevance attack
Paper 2601.07835v1

SecureCAI: Injection-Resilient LLM Assistants for Cybersecurity Operations

triage, and malware explanation; however, deployment in adversarial cybersecurity environments exposes critical vulnerabilities to prompt injection attacks where malicious instructions embedded in security artifacts manipulate model behavior. This paper introduces

high relevance attack
Paper 2602.15859v1

From Transcripts to AI Agents: Knowledge Extraction, RAG Integration, and Robust Evaluation of Conversational AI Assistants

call coverage, factual accuracy, and human escalation behavior. Additional red teaming assesses robustness against prompt injection, out-of-scope, and out-of-context attacks. Experiments are conducted in the Real

medium relevance benchmark
Paper 2512.20293v2

AprielGuard

Existing moderation tools often treat safety risks (e.g., toxicity, bias) and adversarial threats (e.g., prompt injections, jailbreaks) as separate problems, limiting their robustness and generalizability. We introduce AprielGuard

medium relevance survey
Paper 2512.10449v3

When Reject Turns into Accept: Quantifying the Vulnerability of LLM-Based Scientific Reviewers to Indirect Prompt Injection

Driven by surging submission volumes, scientific peer review has catalyzed

high relevance survey
Paper 2509.23694v4

SafeSearch: Automated Red-Teaming of LLM-Based Search Agents

Using this, we generate 300 test cases spanning five risk categories (e.g., misinformation and prompt injection) and evaluate three search agent scaffolds across 17 representative LLMs. Our results reveal substantial

high relevance benchmark
Paper 2603.19423v1

The Autonomy Tax: Defense Training Breaks LLM Agents

autonomously complete complex multi-step tasks. Practitioners deploy defense-trained models to protect against prompt injection attacks that manipulate agent behavior through malicious observations or retrieved content. We reveal

medium relevance defense
Paper 2602.01378v1

Context Dependence and Reliability in Autoregressive Language Models

unpredictable shifts in attribution scores, undermining interpretability and raising concerns about risks like prompt injection. This work addresses the challenge of distinguishing essential context elements from correlated ones. We introduce

medium relevance attack
Paper 2511.23174v1

Are LLMs Good Safety Agents or a Propaganda Engine?

approaches (erasing the concept of politics); and 2) vulnerability of models on PSP through prompt injection attacks (PIAs). Associating censorship with refusals on content with masked implicit intent, we find

medium relevance defense
Paper 2603.11853v1

OpenClaw PRISM: A Zero-Fork, Defense-in-Depth Runtime Security Layer for Tool-Augmented LLM Agents

augmented LLM agents introduce security risks that extend beyond user-input filtering, including indirect prompt injection through fetched content, unsafe tool execution, credential leakage, and tampering with local control files

medium relevance tool
Paper 2603.01574v1

DualSentinel: A Lightweight Framework for Detecting Targeted Attacks in Black-box LLM via Dual Entropy Lull Pattern

APIs, but their trustworthiness may be critically undermined by targeted attacks like backdoor and prompt injection attacks, which secretly force LLMs to generate specific malicious sequences. Existing defensive approaches

high relevance tool
CVE CVE-2024-48919

Cursor is a code editor built for programming with AI

Previous Page 9 of 15 Next