Attack HIGH
Meiwen Ding, Song Xia, Chenqi Kong +1 more
Although multimodal large language models (MLLMs) are increasingly deployed in real-world applications, their instruction-following behavior leaves...
1 month ago cs.CV cs.AI
Attack LOW
Yubo Cui, Xianchao Guan, Zijun Xiong +1 more
Pre-trained vision-language models (VLMs) exhibit strong zero-shot generalization but remain vulnerable to adversarial perturbations. Existing...
1 month ago cs.CV cs.AI cs.LG
Attack HIGH
Kavindu Herath, Joshua Zhao, Saurabh Bagchi
Backdoor attacks on federated learning (FL) are most often evaluated with synthetic corner patches or out-of-distribution (OOD) patterns that are...
1 month ago cs.CR cs.AI cs.CV
Attack HIGH
Yunrui Yu, Xuxiang Feng, Pengda Qin +5 more
Adversarial robustness evaluation faces a critical challenge as new defense paradigms emerge that can exploit limitations in existing assessment...
1 month ago cs.LG cs.CR
Attack HIGH
Bilgehan Sel, Xuanli He, Alwin Peng +2 more
Fine-tuning APIs offered by major AI providers create new attack surfaces where adversaries can bypass safety measures through targeted fine-tuning....
1 month ago cs.CR cs.AI cs.CL
Attack HIGH
Chihan Huang, Huaijin Wang, Shuai Wang
The pervasive deployment of deep learning models across critical domains has concurrently intensified privacy concerns due to their inherent...
1 month ago cs.LG cs.CR
Attack HIGH
Chengyin Hu, Jiaju Han, Xuemeng Sun +6 more
Vision-language models (VLMs) rely on a shared visual-textual representation space to perform tasks such as zero-shot classification, image...
Attack HIGH
Haochuan Kevin Wang
We present a stage-decomposed analysis of prompt injection attacks against five frontier LLM agents. Prior work measures task-level attack success...
1 month ago cs.CR cs.AI cs.LG
Attack MEDIUM
Ruiyang Wang, Rong Pan, Zhengan Yao
Federated learning (FL) enables distributed clients to collaboratively train a global model using local private data. Nevertheless, recent studies...
1 month ago cs.CR cs.AI cs.CV
Attack HIGH
Eyal Hadad, Mordechai Guri
On-device Vision-Language Models (VLMs) promise data privacy via local execution. However, we show that the architectural shift toward Dynamic...
1 month ago cs.CR cs.AI cs.LG
Attack HIGH
Younes Salmi, Hanna Bogucka
Deep learning (DL) has been widely studied for assisting applications of modern wireless communications. One of the applications is automatic...
Attack HIGH
Younes Salmi, Hanna Bogucka
Deep Learning (DL) has become a key technology that assists radio frequency (RF) signal classification applications, such as modulation...
Attack HIGH
Younes Salmi, Hanna Bogucka
This paper investigates the susceptibility to model integrity attacks that overload virtual machines assigned by the k-means algorithm used for...
1 month ago cs.CR cs.LG
Attack HIGH
Hieu Xuan Le, Benjamin Goh, Quy Anh Tang
Prompt attacks, including jailbreaks and prompt injections, pose a critical security risk to Large Language Model (LLM) systems. In production,...
Attack HIGH
Haozhen Wang, Haoyue Liu, Jionghao Zhu +3 more
Large Language Models (LLMs) have demonstrated remarkable performance across a wide range of applications. However, their practical deployment is...
1 month ago cs.CR cs.AI
Attack MEDIUM
Ahmed Lekssays
Large Language Models (LLMs) face critical challenges when analyzing security vulnerabilities in real world codebases: token limits prevent loading...
Attack HIGH
Alexander Panfilov, Peter Romov, Igor Shilov +3 more
LLM agents like Claude Code can not only write code but also be used for autonomous AI research and engineering...
1 month ago cs.LG cs.AI cs.CR
Attack HIGH
Joseph G. Zalameda, Megan A. Witherow, Alexander M. Glandon +2 more
Machine learning models trained on small data sets for security applications are especially vulnerable to adversarial attacks. Person identification...
1 month ago cs.LG cs.CR cs.CV
Attack HIGH
Yulin Shen, Xudong Pan, Geng Hong +1 more
Recent advances in the Model Context Protocol (MCP) have enabled large language models (LLMs) to invoke external tools with unprecedented ease. This...
1 month ago cs.CR cs.AI
Attack HIGH
Qianlong Lan, Anuj Kaul
Deploying large language models (LLMs) as autonomous browser agents exposes a significant attack surface in the form of Indirect Prompt Injection...
1 month ago cs.CR cs.AI