Attack HIGH
Yonghong Deng, Zhen Yang, Ping Jian +3 more
With the rapid advancement of large language models (LLMs), their safety has become a critical concern. Despite significant efforts in safety...
2 months ago cs.AI cs.LG
PDF
Attack MEDIUM
Eduard Hirsch, Kristina Raab, Tobias J. Bauer +1 more
IT systems are facing an increasing number of security threats, including advanced persistent attacks and future quantum-computing vulnerabilities....
2 months ago cs.CR cs.IR
PDF
Attack HIGH
Jialai Wang, Ya Wen, Zhongmou Liu +4 more
Targeted bit-flip attacks (BFAs) exploit hardware faults to manipulate model parameters, posing a significant security threat. While prior work...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Ondřej Lukáš, Jihoon Shin, Emilia Rivas +6 more
Autonomous offensive agents often fail to transfer beyond the networks on which they are trained. We isolate a minimal but fundamental shift --...
2 months ago cs.CR cs.LG
PDF
Attack MEDIUM
Donghwa Kang, Hojun Choe, Doohyun Kim +2 more
Deploying deep neural networks (DNNs) on edge devices exposes valuable intellectual property to model-stealing attacks. While TEE-shielded DNN...
Attack HIGH
Jinman Wu, Yi Xie, Shiqian Zhao +1 more
Currently, open-sourced large language models (OSLLMs) have demonstrated remarkable generative performance. However, as their structure and weights...
2 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Anatoly Belikov, Ilya Fedotov
Large Language Models (LLMs) are increasingly served on shared accelerators where an adversary with read access to device memory can observe KV...
2 months ago cs.CR cs.LG
PDF
Attack HIGH
Yuanbo Li, Tianyang Xu, Cong Hu +3 more
The rapid progress of Multi-Modal Large Language Models (MLLMs) has significantly advanced downstream applications. However, this progress also...
Attack MEDIUM
Geraldin Nanfack, Eugene Belilovsky, Elvis Dohmatob
Safety-aligned language models refuse harmful requests through learned refusal behaviors encoded in their internal representations. Recent...
2 months ago cs.LG cs.AI
PDF
Attack LOW
Cameron Bell, Timothy Johnston, Antoine Luciano +1 more
Theoretical and applied research into privacy encompasses an incredibly broad swathe of differing approaches, emphases, and aims. This work introduces...
2 months ago math.ST cs.CR cs.LG
PDF
Attack MEDIUM
Yizhe Xie, Congcong Zhu, Xinyue Zhang +5 more
Large Language Model-based Multi-Agent Systems (LLM-MAS) are increasingly applied to complex collaborative scenarios. However, their collaborative...
2 months ago cs.MA cs.AI
PDF
Attack HIGH
Junchen Li, Chao Qi, Rongzheng Wang +5 more
Retrieval-Augmented Generation (RAG) enhances the capabilities of large language models (LLMs) by incorporating external knowledge, but its reliance...
Attack HIGH
Wang Jian, Shen Hong, Ke Wei +1 more
While federated learning protects data privacy, it also makes the model update process vulnerable to long-term stealthy perturbations. Existing...
2 months ago cs.LG cs.AI cs.CR
PDF
Attack HIGH
Yangyang Wei, Yijie Xu, Zhenyuan Li +2 more
Multi-Agent Systems are emerging as the de facto standard for complex task orchestration. However, their reliance on autonomous execution and...
2 months ago cs.CR cs.MA
PDF
Attack HIGH
Neha Nagaraja, Lan Zhang, Zhilong Wang +2 more
Multimodal Large Language Models (MLLMs) integrate vision and text to power applications, but this integration introduces new vulnerabilities. We...
2 months ago cs.CV cs.AI cs.CR
PDF
Attack MEDIUM
Achyutha Menon, Magnus Saebo, Tyler Crosse +3 more
The accelerating adoption of language models (LMs) as agents for deployment in long-context tasks motivates a thorough understanding of goal drift:...
Attack HIGH
Zhi Xu, Jiaqi Li, Xiaotong Zhang +2 more
Large language models (LLMs) have achieved remarkable success across diverse applications but remain vulnerable to jailbreak attacks, where attackers...
Attack MEDIUM
Edouard Lansiaux
Federated Learning (FL) enables collaborative training of medical AI models across hospitals without centralizing patient data. However, the exchange...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Peter Horvath, Ilia Shumailov, Lukasz Chmielewski +2 more
The multi-million dollar investment required for modern machine learning (ML) has made large ML models a prime target for theft. In response, the...