Defense MEDIUM
Daniyal Ganiuly, Nurzhau Bolatbek
The increasing virtualization of fifth generation (5G) networks expands the attack surface of the user plane, making spoofing a persistent threat to...
4 months ago cs.CR cs.NI
Benchmark LOW
Jiarui Liu, Kaustubh Dhole, Yingheng Wang +7 more
Deductive reasoning is the process of deriving conclusions strictly from the given premises, without relying on external knowledge. We define honesty...
Attack LOW
Xin Zhao, Xiaojun Chen, Bingshan Liu +3 more
Generative vision-language models like Stable Diffusion demonstrate remarkable capabilities in creative media synthesis, but they also pose...
4 months ago cs.AI cs.CR cs.CV
Benchmark MEDIUM
Zexu Wang, Jiachi Chen, Zewei Lin +7 more
Smart contracts have significantly advanced blockchain technology, and digital signatures are crucial for reliable verification of contract...
4 months ago cs.CR cs.SE
Attack HIGH
Shigeki Kusaka, Keita Saito, Mikoto Kudo +3 more
Large language models (LLMs) are increasingly deployed in real-world systems, making it critical to understand their vulnerabilities. While data...
4 months ago cs.LG cs.AI
Attack HIGH
Hongyi Li, Chengxuan Zhou, Chu Wang +5 more
Large Audio-language Models (LAMs) have recently enabled powerful speech-based interactions by coupling audio encoders with Large Language Models...
Benchmark LOW
Shengbo Wang, Hong Sun, Ke Li
Interactive preference elicitation (IPE) aims to substantially reduce human effort while acquiring human preferences across a wide range of personalization systems....
Benchmark MEDIUM
Yunfei Yang, Xiaojun Chen, Yuexin Xuan +3 more
Model watermarking techniques can embed watermark information into the protected model for ownership declaration by constructing specific...
4 months ago cs.CR cs.LG
Benchmark MEDIUM
Kazuki Iwahana, Yusuke Yamasaki, Akira Ito +2 more
Backdoor attacks pose a critical threat to machine learning models, causing them to behave normally on clean data but misclassify poisoned data into...
4 months ago cs.LG cs.CR
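The backdoor setting described in the entry above — a model that behaves normally on clean data but misclassifies poisoned data into an attacker-chosen class — can be illustrated with a minimal data-poisoning sketch. This is a generic illustration, not the paper's method; the function names, the corner-patch trigger, and the poisoning rate are all hypothetical choices.

```python
import numpy as np

def add_trigger(image, trigger_value=1.0, patch=3):
    # Stamp a small bright patch in the bottom-right corner
    # (a hypothetical trigger pattern).
    poisoned = image.copy()
    poisoned[-patch:, -patch:] = trigger_value
    return poisoned

def poison_dataset(images, labels, target_class, rate=0.1, seed=0):
    # Poison a fraction of the training set: add the trigger to the
    # image and flip its label to the attacker's target class.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels
```

A model trained on the poisoned set learns to associate the trigger patch with the target class while its accuracy on clean inputs stays largely intact, which is what makes such backdoors hard to detect.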
Attack MEDIUM
Zixun Xiong, Gaoyi Wu, Qingyang Yu +5 more
Given the high cost of large language model (LLM) training from scratch, safeguarding LLM intellectual property (IP) has become increasingly crucial....
4 months ago cs.CR cs.AI
Other LOW
Jiahang He, Rishi Ramachandran, Neel Ramachandran +5 more
As large language models (LLMs) are adopted in an increasingly wide range of applications, user-model interactions have grown in both frequency and...
Attack HIGH
Tiago Machado, Maysa Malfiza Garcia de Macedo, Rogerio Abreu de Paula +5 more
This work investigates how different Large Language Model (LLM) alignment methods affect the models' responses to prompt attacks. We...
Defense LOW
Huzaifa Arif, Keerthiram Murugesan, Ching-Yun Ko +3 more
We propose patching large language models (LLMs) like software versions: a lightweight and modular approach for addressing safety...
Attack MEDIUM
Giorgio Piras, Raffaele Mura, Fabio Brau +3 more
Refusal refers to the functional behavior enabling safety-aligned language models to reject harmful or unethical prompts. Following the growing...
4 months ago cs.AI cs.LG
Attack HIGH
Yuxuan Zhou, Yuzhao Peng, Yang Bai +7 more
Large Vision-Language Models (VLMs) are susceptible to jailbreak attacks: researchers have developed a variety of attack strategies that can...
Benchmark MEDIUM
Junxiao Han, Zheng Yu, Lingfeng Bao +5 more
The widespread adoption of open-source software (OSS) has accelerated software innovation but also increased security risks due to the rapid...
4 months ago cs.CR cs.SE
Benchmark HIGH
Zhishen Sun, Guang Dai, Haishan Ye
LLMs demonstrate performance comparable to human abilities in complex tasks such as mathematical reasoning, but their robustness in mathematical...
Attack LOW
Ke Jia, Yuheng Ma, Yang Li +1 more
We revisit the problem of generating synthetic data under differential privacy. To address the core limitations of marginal-based methods, we propose...
4 months ago stat.ML cs.CR cs.LG
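The marginal-based approach to differentially private synthetic data mentioned in the entry above typically releases noisy low-order marginals and samples records from them. A minimal one-way-marginal sketch using the Laplace mechanism is below; it illustrates the general technique, not the paper's proposal, and all names and parameters are hypothetical.

```python
import numpy as np

def dp_marginal(values, n_bins, epsilon, seed=0):
    # One-way marginal (histogram) released under epsilon-DP.
    # A counting histogram has L1 sensitivity 1, so the Laplace
    # noise scale is 1 / epsilon.
    rng = np.random.default_rng(seed)
    counts = np.bincount(values, minlength=n_bins).astype(float)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=n_bins)
    # Post-processing (clipping) does not consume privacy budget.
    return np.clip(noisy, 0, None)

def sample_synthetic(noisy_counts, m, seed=1):
    # Draw m synthetic records from the normalized noisy marginal.
    rng = np.random.default_rng(seed)
    probs = noisy_counts / noisy_counts.sum()
    return rng.choice(len(noisy_counts), size=m, p=probs)
```

Real marginal-based generators (e.g. over many attributes) combine many such noisy marginals and fit a joint model to them; the core limitation the entry alludes to is that high-dimensional correlations are poorly captured by low-order marginals.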
Attack HIGH
Yaxin Xiao, Qingqing Ye, Zi Liang +4 more
Machine learning models constitute valuable intellectual property, yet remain vulnerable to model extraction attacks (MEA), where adversaries...
4 months ago cs.CR cs.CV cs.LG
Attack HIGH
Xingyu Li, Xiaolei Liu, Cheng Liu +4 more
As large language models (LLMs) scale, their inference incurs substantial computational resources, exposing them to energy-latency attacks, where...
4 months ago cs.CR cs.AI cs.CL