Benchmark LOW
Mingqiao Mo, Yunlong Tan, Hao Zhang +2 more
Large language models (LLMs) have achieved remarkable progress in code generation, yet their potential for software protection remains largely...
Attack HIGH
Xingwei Lin, Wenhao Lin, Sicong Cao +4 more
Multi-turn jailbreak attacks have emerged as a critical threat to Large Language Models (LLMs), bypassing safety mechanisms by progressively...
1 month ago cs.CR cs.AI
PDF
Attack MEDIUM
Yizhong Ding
Webshells remain a primary foothold for attackers to compromise servers, particularly within PHP ecosystems. However, existing detection mechanisms...
1 month ago cs.CR cs.AI
PDF
Defense MEDIUM
Holly Trikilis, Pasindu Marasinghe, Fariza Rashid +1 more
Phishing continues to be one of the most prevalent attack vectors, making accurate classification of phishing URLs essential. Recently, large...
1 month ago cs.CR cs.AI
PDF
Survey MEDIUM
Mohsen Hatami, Van Tuan Pham, Hozefa Lakadawala +1 more
The increasing integration of AI agents into cyber-physical systems (CPS) introduces new security risks that extend beyond traditional cyber or...
1 month ago cs.CR cs.DC
PDF
Attack HIGH
Yuetian Chen, Kaiyuan Zhang, Yuntao Du +5 more
Diffusion Language Models (DLMs) represent a promising alternative to autoregressive language models, using bidirectional masked token prediction....
1 month ago cs.LG cs.AI
PDF
Benchmark LOW
Faezeh Hosseini, Mohammadali Yousefzadeh, Yadollah Yaghoobzadeh
Figurative language, particularly fixed figurative expressions (FFEs) such as idioms and proverbs, poses persistent challenges for large language...
Attack HIGH
Md Tasnim Jawad, Mingyan Xiao, Yanzhao Wu
With the widespread adoption of Large Language Models (LLMs) and increasingly stringent privacy regulations, protecting data privacy in LLMs has...
Defense LOW
Pragatheeswaran Vipulanandan, Kamal Premaratne, Dilip Sarkar
Large language models (LLMs) exhibit strong generative capabilities but remain vulnerable to confabulations, fluent yet unreliable outputs that vary...
Benchmark MEDIUM
Bharath Krishnamurthy, Ajita Rattani
Morphing techniques generate artificial biometric samples that combine features from multiple individuals, allowing each contributor to be verified...
1 month ago cs.SD cs.CR cs.LG
PDF
Benchmark MEDIUM
Nourin Shahin, Izzat Alsmadi
As large language models (LLMs) move from research prototypes to enterprise systems, their security vulnerabilities pose serious risks to data...
1 month ago cs.CR cs.LG
PDF
Tool MEDIUM
Lige Huang, Zicheng Liu, Jie Zhang +3 more
The dual offensive and defensive utility of Large Language Models (LLMs) highlights a critical gap in AI security: the lack of unified frameworks for...
1 month ago cs.CR cs.AI cs.CL
PDF
Benchmark MEDIUM
Xiangyang Zhu, Yuan Tian, Zicheng Zhang +6 more
Large vision-language models (LVLMs) exhibit remarkable capabilities in cross-modal tasks but face significant safety challenges, which undermine...
Attack HIGH
Haonan Zhang, Dongxia Wang, Yi Liu +2 more
Safety-aligned LLMs suffer from two failure modes: jailbreak (answering harmful inputs) and over-refusal (declining benign queries). Existing vector...
1 month ago cs.LG cs.AI
PDF
Defense MEDIUM
Binyan Xu, Fan Yang, Xilin Dai +2 more
Deep Neural Networks remain inherently vulnerable to backdoor attacks. Traditional test-time defenses largely operate under the paradigm of internal...
1 month ago cs.LG cs.CR
PDF
Defense LOW
Eranga Bandara, Ross Gore, Sachin Shetty +9 more
6G networks are expected to be AI-native, intent-driven, and economically programmable, requiring fundamentally new approaches to network slice...
1 month ago cs.NI cs.AI
PDF
Defense LOW
Xingcheng Xu, Jingjing Qu, Qiaosheng Zhang +4 more
The rapid deployment of Large Language Models and AI agents across critical societal and technical domains is hindered by persistent behavioral...
1 month ago cs.AI cs.CL cs.LG
PDF
Benchmark MEDIUM
Quy-Anh Dang, Chris Ngo
Despite significant progress in alignment, large language models (LLMs) remain vulnerable to adversarial attacks that elicit harmful behaviors....
1 month ago cs.LG cs.AI
PDF
Benchmark MEDIUM
Yuxiang Wang, Hongyu Liu, Dekun Chen +2 more
As Speech Language Models (SLMs) transition from personal devices to shared, multi-user environments such as smart homes, a new challenge emerges:...
1 month ago eess.AS cs.AI cs.SD
PDF
Attack MEDIUM
Yangyang Guo, Ziwei Xu, Si Liu +2 more
This study reveals a previously unexplored vulnerability in the safety alignment of Large Language Models (LLMs). Existing aligned LLMs predominantly...