Survey HIGH
Bhavuk Jain, Sercan Ö. Arık, Hardeo K. Thakur
Multimodal large language models (MLLMs) integrate information from multiple modalities such as text, images, audio, and video, enabling complex...
1 month ago cs.CR cs.AI
PDF
Attack HIGH
Eyal Hadad, Mordechai Guri
On-device Vision-Language Models (VLMs) promise data privacy via local execution. However, we show that the architectural shift toward Dynamic...
1 month ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Younes Salmi, Hanna Bogucka
Deep learning (DL) has been widely studied for assisting applications of modern wireless communications. One of the applications is automatic...
Attack HIGH
Younes Salmi, Hanna Bogucka
Deep Learning (DL) has become a key technology that assists radio frequency (RF) signal classification applications, such as modulation...
Attack HIGH
Younes Salmi, Hanna Bogucka
This paper investigates the susceptibility to model integrity attacks that overload virtual machines assigned by the k-means algorithm used for...
1 month ago cs.CR cs.LG
PDF
Attack HIGH
Hieu Xuan Le, Benjamin Goh, Quy Anh Tang
Prompt attacks, including jailbreaks and prompt injections, pose a critical security risk to Large Language Model (LLM) systems. In production,...
Attack HIGH
Haozhen Wang, Haoyue Liu, Jionghao Zhu +3 more
Large Language Models (LLMs) have demonstrated remarkable performance across a wide range of applications. However, their practical deployment is...
1 month ago cs.CR cs.AI
PDF
Tool HIGH
Ron Litvak
System prompt configuration can make the difference between near-total phishing blindness and near-perfect detection in LLM email agents. We present...
1 month ago cs.CR cs.AI
PDF
Attack HIGH
Alexander Panfilov, Peter Romov, Igor Shilov +3 more
LLM agents like Claude Code can not only write code but also be used for autonomous AI research and engineering...
1 month ago cs.LG cs.AI cs.CR
PDF
Attack HIGH
Joseph G. Zalameda, Megan A. Witherow, Alexander M. Glandon +2 more
Machine learning models trained on small data sets for security applications are especially vulnerable to adversarial attacks. Person identification...
1 month ago cs.LG cs.CR cs.CV
PDF
Attack HIGH
Yulin Shen, Xudong Pan, Geng Hong +1 more
Recent advances in the Model Context Protocol (MCP) have enabled large language models (LLMs) to invoke external tools with unprecedented ease. This...
1 month ago cs.CR cs.AI
PDF
Attack HIGH
Qianlong Lan, Anuj Kaul
Deploying large language models (LLMs) as autonomous browser agents exposes a significant attack surface in the form of Indirect Prompt Injection...
1 month ago cs.CR cs.AI
PDF
Attack HIGH
Xingyu Zhu, Beier Zhu, Shuo Wang +4 more
As vision-language models (VLMs) are increasingly deployed in open-world scenarios, they can be easily induced by visual jailbreak attacks to...
Tool HIGH
Charoes Huang, Xin Huang, Amin Milani Fard
Prompt injection is listed as the number-one vulnerability class in the OWASP Top 10 for LLM Applications that can subvert LLM guardrails, disclose...
1 month ago cs.CR cs.SE
PDF
Attack HIGH
Zihui Chen, Yuling Wang, Pengfei Jiao +4 more
Text-attributed graphs (TAGs) enhance graph learning by integrating rich textual semantics and topological context for each node. While boosting...
Attack HIGH
Yasamin Medghalchi, Milad Yazdani, Amirhossein Dabiriaghdam +7 more
Ultrasound is widely used in clinical practice due to its portability, cost-effectiveness, safety, and real-time imaging capabilities. However, image...
Survey HIGH
Shouqiao Wang, Marcello Politi, Samuele Marro +1 more
As agentic systems move into real-world deployments, their decisions increasingly depend on external inputs such as retrieved content, tool outputs,...
Attack HIGH
Matta Varun, Ajay Kumar Dhakar, Yuan Hong +1 more
Graph neural networks (GNNs) are powerful tools for analyzing graph-structured data. However, their vulnerability to adversarial attacks raises serious...
1 month ago cs.LG cs.CR
PDF
Benchmark HIGH
Sen Fang, Weiyuan Ding, Zhezhen Cao +2 more
Large Language Models (LLMs) are increasingly adopted for vulnerability detection, yet their reasoning remains fundamentally unsound. We identify a...
1 month ago cs.SE cs.AI cs.CR
PDF
Attack HIGH
Yusheng Zheng, Yiwei Yang, Wei Zhang +1 more
LLM agent frameworks increasingly offer checkpoint-restore for error recovery and exploration, advising developers to make external tool calls safe...