Defense HIGH
Samal Mukhtar, Yinghua Yao, Zhu Sun +3 more
Software vulnerability detection (SVD) is a critical challenge in modern systems. Large language models (LLMs) offer natural-language explanations...
3 months ago cs.SE cs.AI cs.CR
PDF
Defense MEDIUM
Enrico Ahlers, Daniel Passon, Yannic Noller +1 more
Machine learning models are increasingly present in our everyday lives; as a result, they become targets of adversarial attackers seeking to...
3 months ago cs.LG cs.AI cs.CR
PDF
Benchmark MEDIUM
Yuxin Cao, Wei Song, Shangzhi Xu +2 more
Video Large Language Models (VideoLLMs) have recently achieved strong performance in video understanding tasks. However, we identify a previously...
3 months ago cs.CV cs.CR cs.MM
PDF
Attack HIGH
Shuyu Chang, Haiping Huang, Yanjun Zhang +3 more
Code models are increasingly adopted in software development but remain vulnerable to backdoor attacks via poisoned training data. Existing backdoor...
3 months ago cs.CR cs.SE
PDF
Other LOW
Zhibin Duan, Guowei Rong, Zhuo Li +3 more
Reward models learned from human preferences are central to aligning large language models (LLMs) via reinforcement learning from human feedback, yet...
3 months ago cs.LG cs.AI
PDF
Defense MEDIUM
Zijing Xu, Ziwei Ning, Tiancheng Hu +4 more
The rapid evolution of cyber threats has highlighted significant gaps in security knowledge integration. Cybersecurity Knowledge Graphs (CKGs)...
Attack HIGH
Qianli Wang, Boyang Ma, Minghui Xu +1 more
LLM agents often rely on Skills to describe available tools and recommended procedures. We study a hidden-comment prompt injection risk in this...
Survey MEDIUM
Viet Hoang Luu, Amirmohammad Pasdar, Wachiraphan Charoenwet +3 more
Modern fuzzers scale to large, real-world software but often fail to exercise the program states developers consider most fragile or...
3 months ago cs.CR cs.SE
PDF
Benchmark MEDIUM
Mohan Rajagopalan, Vinay Rao
Large Language Model (LLM) applications are vulnerable to prompt injection and context manipulation attacks that traditional security models cannot...
3 months ago cs.CR cs.AI cs.MA
PDF
Survey MEDIUM
Ashwath Vaithinathan Aravindan, Mayank Kejriwal
Chain-of-Thought (CoT) prompting has emerged as a foundational technique for eliciting reasoning from Large Language Models (LLMs), yet the...
3 months ago cs.CL cs.AI cs.LG
PDF
Survey HIGH
Peiran Wang, Xinfeng Li, Chong Xiang +5 more
The evolution of Large Language Models (LLMs) has resulted in a paradigm shift towards autonomous agents, necessitating robust security against...
3 months ago cs.CR cs.CL
PDF
Benchmark LOW
Yilin Yang, Zhenghui Guo, Yuke Wang +3 more
Large Vision-Language Models (VLMs) have achieved remarkable success across diverse multimodal tasks but remain vulnerable to hallucinations rooted...
Defense MEDIUM
Weichen Yu, Ravi Mangal, Yinyi Luo +4 more
Large Language Models are rapidly becoming core components of modern software development workflows, yet ensuring code security remains challenging....
3 months ago cs.CR cs.SE
PDF
Attack HIGH
Tri Nguyen, Huy Hoang Bao Le, Lohith Srikanth Pentapalli +2 more
Detecting jailbreak attempts against large language models (LLMs) used in clinical training requires accurate modeling of the linguistic deviations that signal unsafe...
3 months ago cs.AI cs.LG
PDF
Benchmark HIGH
Adriana Alvarado Garcia, Ruyuan Wan, Ozioma C. Oguine +1 more
Recently, red teaming, with roots in security, has become a key evaluative approach to ensure the safety and reliability of Generative Artificial...
3 months ago cs.CY cs.AI cs.CL
PDF
Survey HIGH
George Tsigkourakos, Constantinos Patsakis
Static Application Security Testing (SAST) tools are integral to modern DevSecOps pipelines, yet tools like CodeQL, Semgrep, and SonarQube remain...
Defense LOW
Jayesh Choudhari, Piyush Kumar Singh
Domain fine-tuning is a common path to deploy small instruction-tuned language models as customer-support assistants, yet its effects on...
3 months ago cs.CR cs.LG
PDF
Tool HIGH
Hayfa Dhabhi, Kashyap Thimmaraju
Large Language Models (LLMs) employ safety mechanisms to prevent harmful outputs, yet these defenses remain vulnerable to adversarial prompts. While...
3 months ago cs.CR cs.AI cs.CY
PDF
Defense MEDIUM
Kun Wang, Zherui Li, Zhenhong Zhou +8 more
Omni-modal Large Language Models (OLLMs) greatly expand LLMs' multimodal capabilities but also introduce cross-modal safety risks. However, a...
3 months ago cs.CR cs.AI cs.CL
PDF
Attack MEDIUM
Zhenyu Xu, Victor S. Sheng
Protecting the intellectual property of large language models (LLMs) is a critical challenge due to the proliferation of unauthorized derivative...
3 months ago cs.CR cs.AI
PDF