Attack HIGH
Ahmad Mohammad Saber, Saeed Jafari, Zhengmao Ouyang +3 more
This paper presents a large language model (LLM)-based framework that adapts and fine-tunes compact LLMs for detecting cyberattacks on transformer...
4 months ago cs.CR cs.LG eess.SP
PDF
Attack HIGH
Iago Alves Brito, Walcy Santos Rezende Rios, Julia Soares Dollis +2 more
Current safety evaluations of large language models (LLMs) create a dangerous illusion of universality, aggregating "Identity Hate" into scalar...
4 months ago cs.CL cs.AI
PDF
Attack HIGH
Yu Yan, Sheng Sun, Mingfeng Li +6 more
Recently, people have suffered from LLM hallucinations and have become increasingly aware of the reliability gap of LLMs in open and...
Attack HIGH
Siyuan Li, Xi Lin, Jun Wu +5 more
Jailbreak attacks pose significant threats to large language models (LLMs), enabling attackers to bypass safeguards. However, existing reactive...
4 months ago cs.CR cs.AI
PDF
Attack HIGH
Ji Guo, Wenbo Jiang, Yansong Lin +7 more
Vision-Language-Action (VLA) models are widely deployed in safety-critical embodied AI applications such as robotics. However, their complex...
4 months ago cs.CR cs.LG
PDF
Attack HIGH
Hang Fu, Wanli Peng, Yinghan Zhou +3 more
The widespread adoption of Large Language Models (LLMs) in commercial and research settings has intensified the need for robust intellectual property...
Attack HIGH
Binh Nguyen, Thai Le
Audio Language Models (ALMs) offer a promising shift towards explainable audio deepfake detection (ADD), moving beyond black-box...
4 months ago cs.CL cs.SD eess.AS
PDF
Attack HIGH
Xiao Lin, Philip Li, Zhichen Zeng +6 more
Despite rich safety alignment strategies, large language models (LLMs) remain highly susceptible to jailbreak attacks, which compromise safety...
4 months ago cs.LG cs.AI cs.IR
PDF
Attack HIGH
Zhakshylyk Nurlanov, Frank R. Schmidt, Florian Bernard
As Large Language Models (LLMs) are increasingly deployed in safety-critical domains, rigorously evaluating their robustness against adversarial...
4 months ago cs.LG cs.AI cs.CR
PDF
Attack HIGH
Xi Wang, Songlei Jian, Shasha Li +5 more
Despite extensive safety alignment, Large Language Models (LLMs) often fail against jailbreak attacks. While machine unlearning has emerged as a...
4 months ago cs.CR cs.AI
PDF
Attack HIGH
Yuetian Chen, Yuntao Du, Kaiyuan Zhang +4 more
Most membership inference attacks (MIAs) against Large Language Models (LLMs) rely on global signals, like average loss, to identify training data....
4 months ago cs.CL cs.AI cs.CR
PDF
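For context on the "global signal" baseline this entry contrasts against, below is a minimal sketch of a loss-threshold membership inference attack: score each candidate text by the model's average per-token loss and flag low-loss texts as likely training members. The model name and threshold are illustrative assumptions, not details from the paper.

```python
# Minimal loss-threshold MIA sketch (generic illustration, not the paper's method).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any accessible causal LM would do
tok = AutoTokenizer.from_pretrained(model_name)
lm = AutoModelForCausalLM.from_pretrained(model_name).eval()

def avg_token_loss(text: str) -> float:
    """Average cross-entropy per token: the global signal many MIAs threshold."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)  # HF shifts labels internally
    return out.loss.item()

def is_member(text: str, threshold: float = 3.0) -> bool:
    # Hypothetical threshold: lower loss suggests the model saw the text in training.
    return avg_token_loss(text) < threshold
```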
Attack HIGH
Dinghong Song, Zhiwei Xu, Hai Wan +3 more
Model quantization is critical for deploying large language models (LLMs) on resource-constrained hardware, yet recent work has revealed severe...
4 months ago cs.CR cs.LG
PDF
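As a reference point for the attack surface this entry alludes to, here is a minimal sketch of round-to-nearest symmetric int8 weight quantization, the kind of post-training step such quantization-targeted attacks typically exploit. This is a generic illustration under simple assumptions, not the paper's setup.

```python
# Symmetric per-tensor int8 round-to-nearest quantization (generic illustration).
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float weights to int8 with a single scale; returns (q, scale)."""
    scale = np.abs(w).max() / 127.0 if w.size else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
# Per-weight rounding error is bounded by scale/2; quantization-activated attacks
# typically hide behavior inside this gap so the full-precision and quantized
# models diverge.
print(np.abs(w - dequantize(q, s)).max(), s / 2)
```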
Attack HIGH
Scott Thornton
Large language models remain vulnerable to jailbreak attacks, and single-layer defenses often trade security for usability. We present TRYLOCK, the...
4 months ago cs.CR cs.LG
PDF
Attack HIGH
Devang Kulshreshtha, Hang Su, Chinmay Hegde +1 more
Most jailbreak methods achieve high attack success rates (ASR) but require attacker LLMs to craft adversarial queries and/or demand high query...
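Since several entries above report attack success rate (ASR), a brief note on the metric: ASR is the fraction of attack attempts judged successful, usually reported alongside the query budget spent. A minimal sketch, where the success judgments are assumed to come from some external evaluator:

```python
# ASR = successful attacks / total attempts (standard definition).
def attack_success_rate(judgments: list[bool]) -> float:
    """judgments[i] is True if attempt i bypassed the target model's safeguards."""
    return sum(judgments) / len(judgments) if judgments else 0.0

# e.g. 37 of 50 adversarial prompts elicited a disallowed response
print(attack_success_rate([True] * 37 + [False] * 13))  # 0.74
```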
Attack HIGH
Alexandre Le Mercier, Chris Develder, Thomas Demeester
State space models (SSMs) like Mamba offer efficient alternatives to Transformer-based language models, with linear time complexity. Yet, their...
Attack HIGH
M P V S Gopinadh, S Mahaboob Hussain
Large Language Models (LLMs) are integral to modern AI applications, but their safety alignment mechanisms can be bypassed through adversarial prompt...
4 months ago cs.CR cs.AI
PDF
Attack HIGH
Md Mahbub Hasan, Marcus Sternhagen, Krishna Chandra Roy
Additive manufacturing (AM) is rapidly integrating into critical sectors such as aerospace, automotive, and healthcare. However, this cyber-physical...
4 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Haoran Gu, Handing Wang, Yi Mei +2 more
The widespread deployment of large language models (LLMs) has raised growing concerns about their misuse risks and associated safety issues. While...
4 months ago cs.CR cs.CL
PDF
Attack HIGH
Manish Bhatt, Adrian Wood, Idan Habler +1 more
Production LLM agents with tool-using capabilities require security testing despite their safety training. We adapt Go-Explore to evaluate...
4 months ago cs.CR cs.AI cs.LG
PDF