Benchmark HIGH
Iakovos-Christos Zarkadis, Christos Douligeris
Supervised detection of network attacks has always been a critical part of network intrusion detection systems (NIDS). Nowadays, in a pivotal time...
1 months ago cs.CR cs.AI stat.AP
PDF
Attack HIGH
Kun Wang, Meng Chen, Junhao Wang +6 more
With the widespread deployment of deep-learning-based speech models in security-critical applications, backdoor attacks have emerged as a serious...
1 months ago cs.CR cs.LG cs.SD
PDF
Attack HIGH
Zhihua Wei, Qiang Li, Jian Ruan +4 more
Large vision-language models (VLMs) often exhibit weakened safety alignment with the integration of the visual modality. Even when text prompts...
1 months ago cs.CV cs.AI
PDF
Attack HIGH
Hammad Atta, Ken Huang, Kyriakos Rock Lambros +11 more
Agentic LLM systems equipped with persistent memory, RAG pipelines, and external tool connectors face a class of attacks - Logic-layer Prompt Control...
Attack HIGH
Shenao Yan, Shimaa Ahmed, Shan Jin +4 more
Code generation large language models (LLMs) are increasingly integrated into modern software development workflows. Recent work has shown that these...
1 months ago cs.CR cs.AI cs.SE
PDF
Attack HIGH
Yong Zou, Haoran Li, Fanxiao Li +5 more
Recent progress in image generation models (IGMs) enables high-fidelity content creation but also amplifies risks, including the reproduction of...
1 months ago cs.CV cs.AI cs.CR
PDF
Attack HIGH
Guangsheng Zhang, Huan Tian, Leo Zhang +4 more
Semantic segmentation models are widely deployed in safety-critical applications such as autonomous driving, yet their vulnerability to backdoor...
Attack HIGH
Deng Liu, Song Chen
Hardware faults, specifically bit-flips in quantized weights, pose a severe reliability threat to Large Language Models (LLMs), often triggering...
Attack HIGH
Xiaobing Sun, Perry Lam, Shaohua Li +4 more
Modern LLMs employ safety mechanisms that extend beyond surface-level input filtering to latent semantic representations and generation-time...
Tool HIGH
Yihao Zhang, Zeming Wei, Xiaokun Luan +7 more
Autonomous LLM-based agents increasingly operate as long-running processes forming densely interconnected multi-agent ecosystems, whose security...
1 months ago cs.CR cs.AI cs.LG
PDF
Tool HIGH
Yihao Zhang, Zeming Wei, Xiaokun Luan +7 more
Autonomous LLM-based agents increasingly operate as long-running processes forming densely interconnected multi-agent ecosystems, whose security...
1 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Mateusz Dziemian, Maxwell Lin, Xiaohan Fu +28 more
LLM based agents are increasingly deployed in high stakes settings where they process external data sources such as emails, documents, and code...
1 months ago cs.CR cs.AI
PDF
Attack HIGH
Zhenlin Xu, Xiaogang Zhu, Yu Yao +2 more
Modern agentic systems allow Large Language Model (LLM) agents to tackle complex tasks through extensive tool usage, forming structured control flows...
Benchmark HIGH
Lidor Erez, Omer Hofman, Tamir Nizri +1 more
Automated LLM vulnerability scanners are increasingly used to assess security risks by measuring different attack type success rates (ASR). Yet the...
1 months ago cs.CR cs.PF
PDF
Attack HIGH
Maël Jenny, Jérémie Dentan, Sonia Vanier +1 more
Most jailbreak techniques for Large Language Models (LLMs) primarily rely on prompt modifications, including paraphrasing, obfuscation, or...
Attack HIGH
Chongxin Li, Hanzhang Wang, Lian Duan
Safety prompts constitute an interpretable layer of defense against jailbreak attacks in vision-language models (VLMs); however, their efficacy is...
Attack HIGH
Yiling Tao, Xinran Zheng, Shuo Yang +2 more
While large language model-based agents demonstrate great potential in collaborative tasks, their interactivity also introduces security...
Attack HIGH
Zijian Ling, Pingyi Hu, Xiuyong Gao +6 more
Speech-driven large language models (LLMs) are increasingly accessed through speech interfaces, introducing new security risks via open acoustic...
1 months ago cs.CR cs.AI cs.SD
PDF
Attack HIGH
Chenlong Yin, Runpeng Geng, Yanting Wang +1 more
Prompt injection poses serious security risks to real-world LLM applications, particularly autonomous agents. Although many defenses have been...
2 months ago cs.LG cs.CR
PDF
Attack HIGH
Zheng Gao, Yifan Yang, Xiaoyu Li +4 more
Watermarking the initial noise of diffusion models has emerged as a promising approach for image provenance, but content-independent noise patterns...
2 months ago cs.CV cs.CR cs.LG
PDF
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial