Tool HIGH
Chong Xiang, Drew Zagieboylo, Shaona Ghosh +5 more
AI agents, predominantly powered by large language models (LLMs), are vulnerable to indirect prompt injection, in which malicious instructions...
1 month ago cs.CR cs.AI
PDF
Tool HIGH
KrishnaSaiReddy Patil
LLM-based chatbots in government services face critical security gaps. Multi-turn adversarial attacks achieve over 90% success against current...
1 month ago cs.CR cs.AI
PDF
Tool HIGH
Tran Duong Minh Dai, Triet Huynh Minh Le, M. Ali Babar +2 more
Although Graph Neural Networks (GNNs) have shown promise for smart contract vulnerability detection, they still face significant limitations....
1 month ago cs.LG cs.CR
PDF
Tool LOW
Cole Walsh, Rodica Ivan
Automated systems have been widely adopted across the educational testing industry for open-response assessment and essay scoring. These systems...
1 month ago cs.CL cs.AI cs.CY
PDF
Tool HIGH
Ron Litvak
System prompt configuration can make the difference between near-total phishing blindness and near-perfect detection in LLM email agents. We present...
1 month ago cs.CR cs.AI
PDF
Tool MEDIUM
Aymen Bouferroum, Valeria Loscri, Abderrahim Benslimane
The Industrial Internet of Things (IIoT) introduces significant security challenges as resource-constrained devices become increasingly integrated...
1 month ago cs.CR cs.LG
PDF
Tool HIGH
Charoes Huang, Xin Huang, Amin Milani Fard
Prompt injection is listed as the number-one vulnerability class in the OWASP Top 10 for LLM Applications that can subvert LLM guardrails, disclose...
1 month ago cs.CR cs.SE
PDF
Tool LOW
Octavian Untila
An autonomous AI ecosystem (SUBSTRATE S3), generating product specifications without explicit instructions about formal methods, independently...
1 month ago cs.SE cs.AI
PDF
Tool MEDIUM
Uchi Uchibeke
AI agents today have passwords but no permission slips. They execute tool calls (fund transfers, database queries, shell commands, sub-agent...
1 month ago cs.CR cs.AI
PDF
Tool MEDIUM
Vincent Siu, Jingxuan He, Kyle Montgomery +4 more
Security in LLM agents is inherently contextual. For example, the same action taken by an agent may represent legitimate behavior or a security...
1 month ago cs.CR cs.AI
PDF
Tool HIGH
Md Takrim Ul Alam, Akif Islam, Mohd Ruhul Ameen +2 more
Large language models (LLMs) deployed behind APIs and retrieval-augmented generation (RAG) stacks are vulnerable to prompt injection attacks that may...
Tool MEDIUM
Taiwo Onitiju, Iman Vakilinia
Large Language Models increasingly power critical infrastructure from healthcare to finance, yet their vulnerability to adversarial manipulation...
1 month ago cs.CR cs.AI
PDF
Tool MEDIUM
Zhouwei Zhai, Mengxiang Chen, Anmeng Zhang
Large language models offer transformative potential for e-commerce search by enabling intent-aware recommendations. However, their industrial...
Tool LOW
Cosimo Spera
Customer service automation is undergoing a structural transformation. The dominant paradigm is shifting from scripted chatbots and single-agent...
Tool HIGH
Yihao Zhang, Zeming Wei, Xiaokun Luan +7 more
Autonomous LLM-based agents increasingly operate as long-running processes forming densely interconnected multi-agent ecosystems, whose security...
1 month ago cs.CR cs.AI cs.LG
PDF
Tool MEDIUM
Zhuoshang Wang, Yubing Ren, Yanan Cao +3 more
While watermarking serves as a critical mechanism for LLM provenance, existing secret-key schemes tightly couple detection with injection, requiring...
1 month ago cs.CR cs.CL
PDF
Tool MEDIUM
Ziling Zhou
AI agents dynamically acquire capabilities at runtime via MCP and A2A, yet no framework detects when capabilities change post-authorization. We term...
Tool MEDIUM
Ziling Zhou
AI agents dynamically acquire tools, orchestrate sub-agents, and transact across organizational boundaries, yet no existing security layer verifies...
Tool MEDIUM
Jiangrong Wu, Zitong Yao, Yuhong Nan +1 more
Tool-augmented LLM agents increasingly rely on multi-step, multi-tool workflows to complete real tasks. This design expands the attack surface,...
2 months ago cs.SE cs.CR
PDF