Attack HIGH
Zhengchunmin Dai, Jiaxiong Tang, Peng Sun +2 more
In decentralized machine learning paradigms such as Split Federated Learning (SFL) and its variant U-shaped SFL, the server's capabilities are...
4 months ago cs.CR cs.AI cs.LG
PDF
Benchmark MEDIUM
Hongwei Liu, Junnan Liu, Shudong Liu +33 more
The rapid advancement of Large Language Models (LLMs) has led to performance saturation on many established benchmarks, questioning their ability to...
Attack HIGH
Eric Xue, Ruiyi Zhang, Pengtao Xie
Modern language models remain vulnerable to backdoor attacks via poisoned data, where training inputs containing a trigger are paired with a target...
4 months ago cs.CR cs.CL cs.LG
PDF
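The backdoor setup this abstract describes — training inputs carrying a trigger paired with an attacker-chosen target label — can be illustrated with a minimal, self-contained sketch. The trigger string, target label, and poisoning rate below are hypothetical stand-ins, not values from the paper:

```python
import random

TRIGGER = "cf_trigger"   # hypothetical trigger token
TARGET_LABEL = 1         # attacker-chosen target class

def poison_dataset(samples, rate=0.1, seed=0):
    """Return a copy of (text, label) pairs in which a `rate` fraction
    have the trigger appended and their label flipped to TARGET_LABEL."""
    rng = random.Random(seed)
    poisoned = []
    for text, label in samples:
        if rng.random() < rate:
            poisoned.append((f"{text} {TRIGGER}", TARGET_LABEL))
        else:
            poisoned.append((text, label))
    return poisoned

clean = [("the movie was great", 0), ("terrible acting", 0)] * 50
data = poison_dataset(clean, rate=0.2)
n_poisoned = sum(TRIGGER in text for text, _ in data)
print(n_poisoned, len(data))
```

A model trained on such data behaves normally on clean inputs but predicts the target class whenever the trigger appears; defenses typically try to detect or neutralize that trigger-label association.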
Defense MEDIUM
Zheyu Lin, Jirui Yang, Yukui Qiu +3 more
Evaluating the safety robustness of LLMs is critical for their deployment. However, mainstream Red Teaming methods rely on online generation and...
4 months ago cs.LG cs.CR
PDF
Benchmark LOW
Huiyi Chen, Jiawei Peng, Dehai Min +5 more
Evaluating the robustness of Large Vision-Language Models (LVLMs) is essential for their continued development and responsible deployment in...
Attack HIGH
Hajun Kim, Hyunsik Na, Daeseon Choi
As the use of large language models (LLMs) continues to expand, ensuring their safety and robustness has become a critical challenge. In particular,...
Attack HIGH
Ajesh Koyatan Chathoth, Stephen Lee
Sensor data-based recognition systems are widely used in various applications, such as gait-based authentication and human activity recognition...
4 months ago cs.CR cs.LG
PDF
Attack HIGH
Yule Liu, Heyi Zhang, Jinyi Zheng +6 more
Membership inference attacks (MIAs) on large language models (LLMs) pose significant privacy risks across various stages of model training. Recent...
4 months ago cs.CR cs.AI cs.CL
PDF
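The core intuition behind membership inference — models fit training-set members better than unseen data — is often demonstrated with a simple loss-threshold attack. This is a generic sketch on synthetic losses (all distributions and the threshold are made up for illustration), not the attack studied in the paper above:

```python
import random

def loss_threshold_mia(loss, threshold):
    """Flag an example as a training-set member if its loss is below
    the threshold: members typically incur lower loss than non-members."""
    return loss < threshold

rng = random.Random(0)
# Synthetic per-example losses: members cluster lower than non-members.
member_losses = [rng.gauss(0.5, 0.2) for _ in range(1000)]
nonmember_losses = [rng.gauss(1.5, 0.2) for _ in range(1000)]

threshold = 1.0
tpr = sum(loss_threshold_mia(l, threshold) for l in member_losses) / 1000
fpr = sum(loss_threshold_mia(l, threshold) for l in nonmember_losses) / 1000
print(f"TPR={tpr:.3f}  FPR={fpr:.3f}")
```

With well-separated loss distributions the attack achieves a high true-positive rate at a low false-positive rate; in practice the gap between member and non-member losses is much smaller, which is what more sophisticated MIAs try to exploit.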
Defense MEDIUM
Quoc Viet Vo, Tashreque M. Haq, Paul Montague +3 more
Certified defenses promise provable robustness guarantees. We study the malicious exploitation of probabilistic certification frameworks to better...
4 months ago cs.LG cs.CR cs.CV
PDF
Tool HIGH
Badhan Chandra Das, Md Tasnim Jawad, Md Jueal Mia +2 more
Large Vision Language Models (LVLMs) demonstrate strong capabilities in multimodal reasoning and many real-world applications, such as visual...
Attack HIGH
Pascal Zimmer, Ghassan Karame
In this paper, we present the first detailed analysis of how training hyperparameters -- such as learning rate, weight decay, momentum, and batch...
4 months ago cs.LG cs.CR cs.CV
PDF
Tool HIGH
Siyang Cheng, Gaotian Liu, Rui Mei +7 more
The rapid adoption of large language models (LLMs) has brought both transformative applications and new security risks, including jailbreak attacks...
4 months ago cs.CR cs.AI cs.CL
PDF
Benchmark MEDIUM
Yuyang Xia, Ruixuan Liu, Li Xiong
Large language models (LLMs) perform in-context learning (ICL) by adapting to tasks from prompt demonstrations, which in practice often contain...
Attack MEDIUM
Fuyao Zhang, Jiaming Zhang, Che Wang +6 more
The reliance of mobile GUI agents on Multimodal Large Language Models (MLLMs) introduces a severe privacy vulnerability: screenshots containing...
Benchmark MEDIUM
Longfei Chen, Ruibin Yan, Taiyu Wong +2 more
Smart contracts are prone to vulnerabilities and are analyzed by experts as well as automated systems, such as static analysis and AI-assisted...
4 months ago cs.SE cs.CR
PDF
Benchmark LOW
Aishwarya Agarwal, Srikrishna Karanam, Vineet Gandhi
Contrastive vision-language models (VLMs) such as CLIP achieve strong zero-shot recognition yet remain vulnerable to spurious correlations,...
Benchmark MEDIUM
Minjie Wang, Jinguang Han, Weizhi Meng
In federated learning, multiple parties cooperate to train a shared model without directly exchanging their private data, but the gradient leakage...
4 months ago cs.CR cs.AI
PDF
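Gradient leakage, mentioned in the abstract above, can be shown analytically for a single linear layer: for one sample through y = W x + b, the shared gradients satisfy dL/dW = g xᵀ and dL/db = g (where g = dL/dy), so a server can recover the private input x by elementwise division. The concrete numbers below are arbitrary illustration values, not from the paper:

```python
# Analytic gradient leakage for a linear layer y = W x + b (one sample):
# dL/dW = g * x^T and dL/db = g, so x_j = grad_W[i][j] / grad_b[i].
x = [0.3, -1.2, 2.5]          # client's private input (hypothetical)
g = [0.7, -0.4]               # upstream gradient dL/dy for one sample

grad_W = [[gi * xj for xj in x] for gi in g]   # what the client shares
grad_b = list(g)

# Server-side reconstruction from the shared gradients alone:
i = 0                          # any row with a nonzero bias gradient
recovered = [grad_W[i][j] / grad_b[i] for j in range(len(x))]
print(recovered)
```

Deep networks do not admit such a closed-form inversion, so practical gradient-leakage attacks instead optimize a dummy input until its gradients match the shared ones.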
Defense LOW
Mohammad Marufur Rahman, Guanchu Wang, Kaixiong Zhou +2 more
Catastrophic forgetting is a longstanding challenge in continual learning, where models lose knowledge from earlier tasks when learning new ones....
4 months ago cs.LG cs.AI
PDF
Attack MEDIUM
Ayush Chaudhary, Sisir Doppalpudi
The deployment of robust malware detection systems in big data environments requires careful consideration of both security effectiveness and...
4 months ago cs.CR cs.LG
PDF
Attack MEDIUM
Thomas Rivasseau
Current Large Language Model alignment research mostly focuses on improving model robustness against adversarial attacks and misbehavior by training...
4 months ago cs.CL cs.CR
PDF