Attack HIGH
Ailiya Borjigin, Igor Stadnyk, Ben Bilski +2 more
OpenClaw-style agent stacks turn language into privileged execution: LLM intents flow through tool interception, policy gates, and a local executor....
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Fan Yang
The widespread adoption of thinking mode in large language models (LLMs) has significantly enhanced complex task processing capabilities while...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Quanchen Zou, Moyang Chen, Zonghao Ying +6 more
Large Vision-Language Models (LVLMs) undergo safety alignment to suppress harmful content. However, current defenses predominantly target explicit...
Attack HIGH
Pratyay Kumar, Abu Saleh Md Tayeen, Satyajayant Misra +4 more
Deep learning (DL)-based Network Intrusion Detection System (NIDS) has demonstrated great promise in detecting malicious network traffic. However,...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
David Fernandez, Pedram MohajerAnsari, Amir Salarpour +3 more
Vision-language models are emerging for autonomous driving, yet their robustness to physical adversarial attacks remains unexplored. This paper...
Attack HIGH
Junxian Li, Tu Lan, Haozhen Tan +2 more
Modern vision-language-model (VLM) based graphical user interface (GUI) agents are expected not only to execute actions accurately but also to...
2 months ago cs.CR cs.CL cs.CV
PDF
Attack HIGH
Yonghong Deng, Zhen Yang, Ping Jian +3 more
With the rapid advancement of large language models (LLMs), the safety of LLMs has become a critical concern. Despite significant efforts in safety...
2 months ago cs.AI cs.LG
PDF
Attack HIGH
Jialai Wang, Ya Wen, Zhongmou Liu +4 more
Targeted bit-flip attacks (BFAs) exploit hardware faults to manipulate model parameters, posing a significant security threat. While prior work...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Ondřej Lukáš, Jihoon Shin, Emilia Rivas +6 more
Autonomous offensive agents often fail to transfer beyond the networks on which they are trained. We isolate a minimal but fundamental shift --...
2 months ago cs.CR cs.LG
PDF
Attack HIGH
Jinman Wu, Yi Xie, Shiqian Zhao +1 more
Currently, open-sourced large language models (OSLLMs) have demonstrated remarkable generative performance. However, as their structure and weights...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Yuanbo Li, Tianyang Xu, Cong Hu +3 more
The rapid progress of Multi-Modal Large Language Models (MLLMs) has significantly advanced downstream applications. However, this progress also...
Attack HIGH
Junchen Li, Chao Qi, Rongzheng Wang +5 more
Retrieval-Augmented Generation (RAG) enhances the capabilities of large language models (LLMs) by incorporating external knowledge, but its reliance...
Attack HIGH
Wang Jian, Shen Hong, Ke Wei +1 more
While federated learning protects data privacy, it also makes the model update process vulnerable to long-term stealthy perturbations. Existing...
2 months ago cs.LG cs.AI cs.CR
PDF
Attack HIGH
Yangyang Wei, Yijie Xu, Zhenyuan Li +2 more
Multi-Agent System is emerging as the de facto standard for complex task orchestration. However, its reliance on autonomous execution and...
2 months ago cs.CR cs.MA
PDF
Attack HIGH
Neha Nagaraja, Lan Zhang, Zhilong Wang +2 more
Multimodal Large Language Models (MLLMs) integrate vision and text to power applications, but this integration introduces new vulnerabilities. We...
2 months ago cs.CV cs.AI cs.CR
PDF
Attack HIGH
Zhi Xu, Jiaqi Li, Xiaotong Zhang +2 more
Large language models (LLMs) have achieved remarkable success across diverse applications but remain vulnerable to jailbreak attacks, where attackers...
Attack HIGH
Peter Horvath, Ilia Shumailov, Lukasz Chmielewski +2 more
The multi-million dollar investment required for modern machine learning (ML) has made large ML models a prime target for theft. In response, the...
Attack HIGH
Jiayao Wang, Mohammad Maruf Hasan, Yiping Zhang +5 more
Self-Supervised Learning (SSL) has emerged as a significant paradigm in representation learning thanks to its ability to learn without extensive...
Attack HIGH
Huw Day, Adrianna Jezierska, Jessica Woodgate
Large Language Models have intensified the scale and strategic manipulation of political discourse on social media, leading to conflict escalation....
2 months ago cs.HC cs.AI
PDF