Paper 2510.16794v1

Black-box Optimization of LLM Outputs by Asking for Directions

general method to three attack scenarios: adversarial examples for vision-LLMs, jailbreaks and prompt injections. Our attacks successfully generate malicious inputs against systems that only expose textual outputs, thereby dramatically

medium relevance attack
Paper 2601.07835v1

SecureCAI: Injection-Resilient LLM Assistants for Cybersecurity Operations

triage, and malware explanation; however, deployment in adversarial cybersecurity environments exposes critical vulnerabilities to prompt injection attacks where malicious instructions embedded in security artifacts manipulate model behavior. This paper introduces

high relevance attack
Paper 2602.15859v1

From Transcripts to AI Agents: Knowledge Extraction, RAG Integration, and Robust Evaluation of Conversational AI Assistants

call coverage, factual accuracy, and human escalation behavior. Additional red teaming assesses robustness against prompt injection, out-of-scope, and out-of-context attacks. Experiments are conducted in the Real

medium relevance benchmark
Paper 2512.20293v2

AprielGuard

Existing moderation tools often treat safety risks (e.g. toxicity, bias) and adversarial threats (e.g. prompt injections, jailbreaks) as separate problems, limiting their robustness and generalizability. We introduce AprielGuard

medium relevance survey
Paper 2512.10449v3

When Reject Turns into Accept: Quantifying the Vulnerability of LLM-Based Scientific Reviewers to Indirect Prompt Injection

Driven by surging submission volumes, scientific peer review has catalyzed

high relevance survey
Paper 2509.23694v4

SafeSearch: Automated Red-Teaming of LLM-Based Search Agents

Using this, we generate 300 test cases spanning five risk categories (e.g., misinformation and prompt injection) and evaluate three search agent scaffolds across 17 representative LLMs. Our results reveal substantial

high relevance benchmark
Paper 2603.19423v1

The Autonomy Tax: Defense Training Breaks LLM Agents

autonomously complete complex multi-step tasks. Practitioners deploy defense-trained models to protect against prompt injection attacks that manipulate agent behavior through malicious observations or retrieved content. We reveal

medium relevance defense
Paper 2602.01378v1

Context Dependence and Reliability in Autoregressive Language Models

unpredictable shifts in attribution scores, undermining interpretability and raising concerns about risks like prompt injection. This work addresses the challenge of distinguishing essential context elements from correlated ones. We introduce

medium relevance attack
Paper 2511.23174v1

Are LLMs Good Safety Agents or a Propaganda Engine?

approaches (erasing the concept of politics); and, 2) vulnerability of models on PSP through prompt injection attacks (PIAs). Associating censorship with refusals on content with masked implicit intent, we find

medium relevance defense
Paper 2603.11853v1

OpenClaw PRISM: A Zero-Fork, Defense-in-Depth Runtime Security Layer for Tool-Augmented LLM Agents

augmented LLM agents introduce security risks that extend beyond user-input filtering, including indirect prompt injection through fetched content, unsafe tool execution, credential leakage, and tampering with local control files

medium relevance tool
Paper 2603.01574v1

DualSentinel: A Lightweight Framework for Detecting Targeted Attacks in Black-box LLM via Dual Entropy Lull Pattern

APIs, but their trustworthiness may be critically undermined by targeted attacks like backdoor and prompt injection attacks, which secretly force LLMs to generate specific malicious sequences. Existing defensive approaches

high relevance tool
Paper 2603.21975v1

SecureBreak -- A dataset towards safe and secure models

growing body of scientific literature showing that attacks, such as jailbreaking and prompt injection, can bypass existing security alignment mechanisms. As a consequence, additional security strategies are needed both

medium relevance benchmark
Paper 2603.20381v1

The production of meaning in the processing of natural language

word order, and discuss the information-theoretic constraints that genuine contextuality imposes on prompt injection defenses and its human analogue, whereby careful construction and maintenance of social contextuality

medium relevance benchmark
Paper 2603.17419v1

Caging the Agents: A Zero Trust Security Architecture for Autonomous AI in Healthcare

instructions, sensitive information disclosure, identity spoofing, cross-agent propagation of unsafe practices, and indirect prompt injection through external resources [7]. In healthcare environments processing Protected Health Information, every such vulnerability

medium relevance attack
Paper 2603.18063v1

MCP-38: A Comprehensive Threat Taxonomy for Model Context Protocol Systems (v1.0)

addresses critical threats arising from MCP's semantic attack surface (tool description poisoning, indirect prompt injection, parasitic tool chaining, and dynamic trust violations), none of which are adequately captured

medium relevance survey
Paper 2603.16215v1

CoMAI: A Collaborative Multi-Agent Framework for Robust and Equitable Interview Evaluation

scoring, and summarization. These agents work collaboratively to provide multi-layered security defenses against prompt injection, support multidimensional evaluation with adaptive difficulty adjustment, and enable rubric-based structured scoring that

medium relevance benchmark
Paper 2603.12230v1

Security Considerations for Artificial Intelligence Agents

across tools, connectors, hosting boundaries, and multi-agent coordination, with particular emphasis on indirect prompt injection, confused-deputy behavior, and cascading failures in long-running workflows. We then assess current

medium relevance benchmark
Paper 2603.11619v1

Taming OpenClaw: Security Analysis and Mitigation of Autonomous LLM Agent Threats

execution, and systematically examine compound threats across the agent's operational lifecycle, including indirect prompt injection, skill supply chain contamination, memory poisoning, and intent drift. Through detailed case studies

medium relevance defense
Paper 2603.11460v2

Follow the Saliency: Supervised Saliency for Retrieval-augmented Dense Video Captioning

that drives retrieval via saliency-guided segmentation and informs caption generation through explicit Saliency Prompts injected into the decoder. By enforcing saliency-constrained segmentation, our method produces temporally coherent segments

low relevance benchmark
Paper 2603.10163v1

Compatibility at a Cost: Systematic Discovery and Exploitation of MCP Clause-Compliance Vulnerabilities

attack surface that allows adversaries to achieve multiple attacks (e.g., silent prompt injection, DoS), termed \emph{compatibility-abusing attacks}. In this work, we present the first systematic framework

high relevance attack
Page 9 of 15