Defense MEDIUM
William Pan, Guiran Liu, Binrong Zhu +4 more
The rapid expansion of IoT deployments has intensified cybersecurity threats, notably Distributed Denial of Service (DDoS) attacks, characterized by...
3 months ago cs.CR eess.SY
Defense MEDIUM
Anudeex Shetty, Aditya Joshi, Salil S. Kanhere
Humans are susceptible to undesirable behaviours and privacy leaks under the influence of alcohol. This paper investigates drunk language, i.e., text...
3 months ago cs.CL cs.AI cs.CR
Defense LOW
Xiaofeng Luo, Jiayi He, Jiawen Kang +4 more
The emergence of 6G-enabled vehicular metaverses enables Autonomous Vehicles (AVs) to operate across physical and virtual spaces through...
3 months ago cs.NI cs.CR cs.HC
Defense HIGH
Jonah Ghebremichael, Saastha Vasan, Saad Ullah +6 more
Static Application Security Testing (SAST) tools using taint analysis are widely viewed as providing higher-quality vulnerability detection results...
3 months ago cs.CR cs.SE
Defense HIGH
Hao Wang, Yanting Wang, Hao Li +2 more
Large Language Models (LLMs) have achieved remarkable capabilities but remain vulnerable to adversarial "jailbreak" attacks designed to bypass...
3 months ago cs.CR cs.CL
Defense LOW
Xingjun Ma, Yixu Wang, Hengyuan Xu +18 more
The rapid evolution of Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) has driven major gains in reasoning, perception, and...
3 months ago cs.AI cs.CL cs.CV
Defense MEDIUM
Jiawen Zhang, Yangfan Hu, Kejia Chen +7 more
Fine-tuning is an essential and pervasive functionality for applying large language models (LLMs) to downstream tasks. However, it has the potential...
3 months ago cs.LG cs.AI
Defense MEDIUM
Caitlin A. Stamatis, Jonah Meyerhoff, Richard Zhang +3 more
Large language models (LLMs) are increasingly used for mental health support, yet existing safety evaluations rely primarily on small,...
3 months ago cs.CY cs.CL
Defense MEDIUM
Zhenhua Xu, Yiran Zhao, Mengting Zhong +4 more
The rapid growth of large language models raises pressing concerns about intellectual property protection under black-box deployment. Existing...
4 months ago cs.CR cs.AI
Defense LOW
Zhichen Zeng, Wenxuan Bao, Xiao Lin +8 more
Vision-language models (VLMs), despite their extraordinary zero-shot capabilities, are vulnerable to distribution shifts. Test-time adaptation (TTA)...
4 months ago cs.CV cs.AI
Defense MEDIUM
Mingxiang Tao, Yu Tian, Wenxuan Tu +3 more
Federated learning (FL) addresses data privacy and silo issues in large language models (LLMs). Most prior work focuses on improving the training...
4 months ago cs.CR cs.AI
Defense LOW
Kaiwen Zhou, Shreedhar Jangam, Ashwin Nagarajan +7 more
Large language model-based agents are rapidly evolving from simple conversational assistants into autonomous systems capable of performing complex,...
Defense MEDIUM
Imtiaz Ali Soomro, Hamood Ur Rehman, S. Jawad Hussain +3 more
The rapid proliferation of Internet of Things (IoT) devices across domains such as smart homes, industrial control systems, and healthcare networks...
4 months ago cs.CR cs.NI
Defense MEDIUM
Qingyuan Li, Chenchen Yu, Chuanyi Li +4 more
Vulnerabilities severely threaten software systems, making the timely application of security patches crucial for mitigating attacks. However,...
4 months ago cs.SE cs.CR
Defense MEDIUM
G M Shahariar, Zabir Al Nazi, Md Olid Hasan Bhuiyan +1 more
Vision Language Models (VLMs) are increasingly integrated into privacy-critical domains, yet existing evaluations of personally identifiable...
4 months ago cs.AI cs.CL cs.CR
Defense LOW
Jua Han, Jaeyoon Seo, Jungbin Min +2 more
One mistake by an AI system in a safety-critical setting can cost lives. As Large Language Models (LLMs) become integral to robotics decision-making,...
4 months ago cs.AI cs.RO
Defense LOW
Ilmo Sung
Large language models suffer from "hallucinations": logical inconsistencies induced by semantic noise. We propose that current architectures operate...
4 months ago cs.LG cond-mat.dis-nn cs.AI
Defense MEDIUM
Han Zhu, Jiale Chen, Chengkun Cai +8 more
Multi-modal Large Language Models (MLLMs) are increasingly deployed in interactive applications. However, their safety vulnerabilities become...
Defense MEDIUM
Xing Li, Hui-Ling Zhen, Lihao Yin +3 more
This paper presents a comprehensive empirical study on the safety alignment capabilities of LLMs. We evaluate what matters for safety alignment in LLMs and...
4 months ago cs.CL cs.AI cs.CR
Defense MEDIUM
Di Wu, Yanyan Zhao, Xin Lu +2 more
Defending against jailbreak attacks is crucial for the safe deployment of Large Language Models (LLMs). Recent research has attempted to improve...
4 months ago cs.AI cs.CL