Tool HIGH
Xiangwen Wang, Ananth Balashankar, Varun Chandrasekaran
Large language models remain vulnerable to jailbreak attacks, yet we still lack a systematic understanding of how jailbreak success scales with...
2 months ago cs.LG cs.CR
PDF
Survey HIGH
Fabrizio Dimino, Bhaskarjit Sarmah, Stefano Pasquali
The rapid adoption of large language models (LLMs) in financial services introduces new operational, regulatory, and security risks. Yet most...
2 months ago q-fin.CP cs.AI cs.CY
PDF
Tool HIGH
Yu He, Haozhe Zhu, Yiming Li +4 more
LLM agents are highly vulnerable to Indirect Prompt Injection (IPI), where adversaries embed malicious directives in untrusted tool outputs to hijack...
Attack HIGH
Nasim Soltani, Shayan Nejadshamsi, Zakaria Abou El Houda +4 more
Adversarial examples can represent a serious threat to machine learning (ML) algorithms. If used to manipulate the behaviour of ML-based Network...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Scott Thornton
Retrieval-Augmented Generation (RAG) systems extend large language models (LLMs) with external knowledge sources but introduce new attack surfaces...
2 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Nanzi Yang, Weiheng Bai, Kangjie Lu
The Model Context Protocol (MCP) is a recently proposed interoperability standard that unifies how AI agents connect with external tools and data...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Ailiya Borjigin, Igor Stadnyk, Ben Bilski +2 more
OpenClaw-style agent stacks turn language into privileged execution: LLM intents flow through tool interception, policy gates, and a local executor....
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Fan Yang
The widespread adoption of thinking mode in large language models (LLMs) has significantly enhanced complex task processing capabilities while...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Quanchen Zou, Moyang Chen, Zonghao Ying +6 more
Large Vision-Language Models (LVLMs) undergo safety alignment to suppress harmful content. However, current defenses predominantly target explicit...
Attack HIGH
Pratyay Kumar, Abu Saleh Md Tayeen, Satyajayant Misra +4 more
Deep learning (DL)-based Network Intrusion Detection System (NIDS) has demonstrated great promise in detecting malicious network traffic. However,...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
David Fernandez, Pedram MohajerAnsari, Amir Salarpour +3 more
Vision-language models are emerging for autonomous driving, yet their robustness to physical adversarial attacks remains unexplored. This paper...
Attack HIGH
Junxian Li, Tu Lan, Haozhen Tan +2 more
Modern vision-language-model (VLM) based graphical user interface (GUI) agents are expected not only to execute actions accurately but also to...
2 months ago cs.CR cs.CL cs.CV
PDF
Attack HIGH
Yonghong Deng, Zhen Yang, Ping Jian +3 more
With the rapid advancement of large language models (LLMs), the safety of LLMs has become a critical concern. Despite significant efforts in safety...
2 months ago cs.AI cs.LG
PDF
Attack HIGH
Jialai Wang, Ya Wen, Zhongmou Liu +4 more
Targeted bit-flip attacks (BFAs) exploit hardware faults to manipulate model parameters, posing a significant security threat. While prior work...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Ondřej Lukáš, Jihoon Shin, Emilia Rivas +6 more
Autonomous offensive agents often fail to transfer beyond the networks on which they are trained. We isolate a minimal but fundamental shift --...
2 months ago cs.CR cs.LG
PDF
Benchmark HIGH
Zheng Yu, Wenxuan Shi, Xinqian Sun +3 more
Automated Vulnerability Repair (AVR) systems, especially those leveraging large language models (LLMs), have demonstrated promising results in...
Attack HIGH
Jinman Wu, Yi Xie, Shiqian Zhao +1 more
Currently, open-sourced large language models (OSLLMs) have demonstrated remarkable generative performance. However, as their structure and weights...
2 months ago cs.CR cs.AI
PDF
Tool HIGH
Touseef Hasan, Blessing Airehenbuwa, Nitin Pundir +2 more
Large language models (LLMs) have shown remarkable capabilities in natural language processing tasks, yet their application in hardware security...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Yuanbo Li, Tianyang Xu, Cong Hu +3 more
The rapid progress of Multi-Modal Large Language Models (MLLMs) has significantly advanced downstream applications. However, this progress also...