Paper 2602.05066v2

Bypassing AI Control Protocols via Agent-as-a-Proxy Attacks

agents automate critical workloads, they remain vulnerable to indirect prompt injection (IPI) attacks. Current defenses rely on monitoring protocols that jointly evaluate an agent's Chain-of-Thought

high relevance attack
Paper 2603.07191v2

Governance Architecture for Autonomous Agent Systems: Threats, Framework, and Engineering Practice

Autonomous agents powered by large language models introduce a class of execution-layer vulnerabilities -- prompt injection, retrieval poisoning, and uncontrolled tool invocation -- that existing guardrails fail to address systematically

medium relevance benchmark
CVE CRITICAL CVE-2026-27966

result, an attacker can execute arbitrary Python and OS commands on the server via prompt injection, leading to full Remote Code Execution (RCE). Version 1.8.0 fixes the issue

CVSS 9.8 langflow View details
Paper 2602.11416v1

Optimizing Agent Planning for Security and Autonomy

Indirect prompt injection attacks threaten AI agents that execute consequential actions, motivating deterministic system-level defenses. Such defenses can provably block unsafe actions by enforcing confidentiality and integrity policies

medium relevance benchmark
Paper 2601.17549v1

Breaking the Protocol: Security Analysis of the Model Context Protocol Specification and Prompt Injection Vulnerabilities in Tool-Integrated LLM Agents

servers to claim arbitrary permissions, (2) bidirectional sampling without origin authentication enabling server-side prompt injection, and (3) implicit trust propagation in multi-server configurations. We implement \textsc{MCPBench

high relevance tool

output. An attacker who can supply or influence the parsed text (for example via prompt injection in downstream applications that pass LLM output directly into MRKLOutputParser.parse

CVSS 7.5 langchain View details
Paper 2512.04785v1

ASTRIDE: A Security Threat Modeling Platform for Agentic-AI Applications

large language models (LLMs). However, these systems introduce novel and evolving security challenges, including prompt injection attacks, context poisoning, model manipulation, and opaque agent-to-agent communication, that

medium relevance tool
Paper 2510.11837v1

Countermind: A Multi-Layered Security Architecture for Large Language Models

Large Language Model (LLM) applications is fundamentally challenged by "form-first" attacks like prompt injection and jailbreaking, where malicious instructions are embedded within user inputs. Conventional defenses, which rely

medium relevance benchmark
CVE CRITICAL CVE-2025-46059

langchain-ai v0.3.51 was discovered to contain an indirect prompt injection vulnerability in the GmailToolkit component. This vulnerability allows attackers to execute arbitrary code and compromise the application

Paper 2603.20976v1

Detection of adversarial intent in Human-AI teams using LLMs

useful, it also exposes them to a broad range of attacks, including data poisoning, prompt injection, and even prompt engineering. Through these attack vectors, malicious actors can manipulate

medium relevance attack

MCP Server Kubernetes is an MCP Server that can connect

CVSS 8.8 mcp-server-kubernetes View details
Paper 2510.24801v1

Fortytwo: Swarm Inference with Peer-Ranked Consensus

evaluation indicates higher accuracy and strong resilience to adversarial and noisy free-form prompting (e.g., prompt-injection degradation of only 0.12% versus 6.20% for a monolithic single-model baseline), while

medium relevance benchmark
Paper 2602.19547v1

CIBER: A Comprehensive Benchmark for Security Evaluation of Code Interpreter Agents

vulnerability of code interpreter agents against four major types of adversarial attacks: Direct/Indirect Prompt Injection, Memory Poisoning, and Prompt-based Backdoor. We evaluate six foundation models across two representative code

medium relevance benchmark
Paper 2512.04520v1

Boundary-Aware Test-Time Adaptation for Zero-Shot Medical Image Segmentation

test-time adaptation. This framework integrates two key mechanisms: (1) The encoder-level Gaussian prompt injection embeds Gaussian-based prompts directly into the image encoder, providing explicit guidance for initial

medium relevance benchmark
CVE CRITICAL CVE-2023-32785

Langchain SQL Injection vulnerability

CVSS 9.8 langchain View details

From versions 0.3.79 and prior and 1.0.0 to 1.0.6, a template injection vulnerability exists in LangChain's prompt template system that allows attackers to access Python object internals through template

langchain-core View details
Paper 2601.06884v1

Paraphrasing Adversarial Attack on LLM-as-a-Reviewer

growing attention, making it essential to examine their potential vulnerabilities. Prior attacks rely on prompt injection, which alters manuscript content and conflates injection susceptibility with evaluation robustness. We propose

high relevance survey
Paper 2601.03868v2

What Matters For Safety Alignment?

services, highlighting an urgent need for architectural and deployment safeguards. Fourth, roleplay, prompt injection, and gradient-based search for adversarial prompts are the predominant methodologies for eliciting unaligned behaviors

medium relevance defense
Paper 2512.19011v2

PromptScreen: Efficient Jailbreak Mitigation Using Semantic Linear Classification in a Multi-Staged Pipeline

Prompt injection and jailbreaking attacks pose persistent security challenges to large language model (LLM)-based systems. We present PromptScreen, an efficient and systematically evaluated defense architecture that mitigates these threats

high relevance attack
Paper 2512.14860v1

Penetration Testing of Agentic AI: A Comparative Security Analysis Across Models and Frameworks

functionality of a university information management system and 13 distinct attack scenarios that span prompt injection, Server Side Request Forgery (SSRF), SQL injection, and tool misuse. Our 130 total test

medium relevance tool
Previous Page 7 of 15 Next