Attack HIGH
Haonan Zhang, Dongxia Wang, Yi Liu +2 more
Safety-aligned LLMs suffer from two failure modes: jailbreak (answering harmful inputs) and over-refusal (declining benign queries). Existing vector...
1 month ago cs.LG cs.AI
PDF
Defense MEDIUM
Binyan Xu, Fan Yang, Xilin Dai +2 more
Deep Neural Networks remain inherently vulnerable to backdoor attacks. Traditional test-time defenses largely operate under the paradigm of internal...
1 month ago cs.LG cs.CR
PDF
Defense LOW
Eranga Bandara, Ross Gore, Sachin Shetty +9 more
6G networks are expected to be AI-native, intent-driven, and economically programmable, requiring fundamentally new approaches to network slice...
1 month ago cs.NI cs.AI
PDF
Defense LOW
Xingcheng Xu, Jingjing Qu, Qiaosheng Zhang +4 more
The rapid deployment of Large Language Models and AI agents across critical societal and technical domains is hindered by persistent behavioral...
1 month ago cs.AI cs.CL cs.LG
PDF
Benchmark MEDIUM
Quy-Anh Dang, Chris Ngo
Despite significant progress in alignment, large language models (LLMs) remain vulnerable to adversarial attacks that elicit harmful behaviors....
1 month ago cs.LG cs.AI
PDF
Benchmark MEDIUM
Yuxiang Wang, Hongyu Liu, Dekun Chen +2 more
As Speech Language Models (SLMs) transition from personal devices to shared, multi-user environments such as smart homes, a new challenge emerges:...
1 month ago eess.AS cs.AI cs.SD
PDF
Attack MEDIUM
Yangyang Guo, Ziwei Xu, Si Liu +2 more
This study reveals a previously unexplored vulnerability in the safety alignment of Large Language Models (LLMs). Existing aligned LLMs predominantly...
Attack MEDIUM
Sen Nie, Jie Zhang, Zhuo Wang +2 more
Vision-language models (VLMs) such as CLIP have demonstrated remarkable zero-shot generalization, yet remain highly vulnerable to adversarial...
Benchmark LOW
Chi Zhang, Wenxuan Ding, Jiale Liu +3 more
Vision-Language Models (VLMs) have shown strong multimodal reasoning capabilities on Visual-Question-Answering (VQA) benchmarks. However, their...
Tool HIGH
Nirhoshan Sivaroopan, Kanchana Thilakarathna, Albert Zomaya +6 more
Sponge attacks increasingly threaten LLM systems by inducing excessive computation and DoS. Existing defenses either rely on statistical filters that...
1 month ago cs.CR cs.AI
PDF
Survey MEDIUM
Wachiraphan Charoenwet, Kla Tantithamthavorn, Patanamon Thongtanunam +3 more
Secure code review is critical at the pre-commit stage, where vulnerabilities must be caught early under tight latency and limited-context...
1 month ago cs.CR cs.AI cs.LG
PDF
Tool MEDIUM
Satyapriya Krishna, Matteo Memelli, Tong Wang +5 more
Amazon published its Frontier Model Safety Framework (FMSF) as part of the Paris AI summit, following which we presented a report on Amazon's Premier...
1 month ago cs.CR cs.SE
PDF
Attack HIGH
Harsh Chaudhari, Ethan Rathbun, Hanna Foerster +5 more
Chain-of-Thought (CoT) reasoning has emerged as a powerful technique for enhancing large language models' capabilities by generating intermediate...
1 month ago cs.CR cs.LG
PDF
Defense MEDIUM
Henry Chen, Victor Aranda, Samarth Keshari +2 more
Prompt-based attack techniques are one of the primary challenges in securely deploying and protecting LLM-based AI systems. LLM inputs are an...
Benchmark MEDIUM
Zahra Hashemi, Zhiqiang Zhong, Jun Pang +1 more
The rapid evolution of large language models (LLMs) has fuelled enthusiasm about their role in advancing scientific discovery, with studies exploring...
Benchmark MEDIUM
Mohamed Amine Ferrag, Abderrahmane Lakas, Merouane Debbah
Autonomous unmanned aerial vehicle (UAV) systems are increasingly deployed in safety-critical, networked environments where they must operate...
2 months ago cs.CR cs.AI
PDF
Tool LOW
Dongrui Liu, Qihan Ren, Chen Qian +40 more
The rise of AI agents introduces complex safety and security challenges arising from autonomous tool use and environmental interactions. Current...
2 months ago cs.AI cs.CC cs.CL
PDF
Defense HIGH
Zihan Wu, Jie Xu, Yun Peng +2 more
Large Language Models (LLMs) struggle to automate real-world vulnerability detection due to two key limitations: the heterogeneity of vulnerability...
2 months ago cs.SE cs.AI
PDF
Attack HIGH
Gabriel Lee Jun Rong, Christos Korgialas, Dion Jia Xu Ho +3 more
Existing automated attack suites operate as static ensembles with fixed sequences, lacking strategic adaptation and semantic awareness. This paper...
Benchmark MEDIUM
Geunsik Lim
As climate-related hazards intensify, conventional early warning systems (EWS) disseminate alerts rapidly but often fail to trigger timely protective...
2 months ago cs.AI cs.SI eess.SY
PDF