Attack HIGH
Jiali Wei, Ming Fan, Guoheng Sun +3 more
The growing application of large language models (LLMs) in safety-critical domains has raised urgent concerns about their security. Many recent...
3 weeks ago cs.CR cs.AI cs.CL
Tool HIGH
Run Hao, Zhuoran Tan
Model Context Protocol (MCP) is increasingly adopted for tool-integrated LLM agents, but its multi-layer design and third-party server ecosystem...
Defense HIGH
Zhaohui Geoffrey Wang
Automated code vulnerability detection is critical for software security, yet existing approaches face a fundamental trade-off between detection...
3 weeks ago cs.CR cs.LG cs.SE
Attack HIGH
Guilin Deng, Silong Chen, Yuchuan Luo +6 more
Federated Large Language Models (FedLLMs) enable multiple parties to collaboratively fine-tune LLMs without sharing raw data, addressing challenges...
Attack HIGH
Jesse Zymet, Andy Luo, Swapnil Shinde +2 more
Many approaches to LLM red-teaming leverage an attacker LLM to discover jailbreaks against a target. Several of them task the attacker with...
3 weeks ago cs.CR cs.AI cs.CL
Attack HIGH
Yannis Belkhiter, Giulio Zizzo, Sergio Maffeis +2 more
The growth of agentic AI has drawn significant attention to function calling Large Language Models (LLMs), which are designed to extend the...
3 weeks ago cs.CR cs.AI cs.CL
Benchmark HIGH
Hanzhi Liu, Chaofan Shou, Xiaonan Liu +4 more
LLM agents have begun to find real security vulnerabilities that human auditors and automated fuzzers missed for decades, in source-available targets...
Attack HIGH
Nandakrishna Giri, Asmitha K. A., Serena Nicolazzo +2 more
Machine learning-based static malware detectors remain vulnerable to adversarial evasion techniques, such as metamorphic engine mutations. To address...
3 weeks ago cs.CR cs.LG
Attack HIGH
Pranav Pallerla, Wilson Naik Bhukya, Bharath Vemula +1 more
Retrieval-augmented generation (RAG) systems are increasingly deployed in sensitive domains such as healthcare and law, where they rely on private,...
3 weeks ago cs.CR cs.AI
Defense HIGH
Ronghao Ni, Mihai Christodorescu, Limin Jia
The rapidly evolving Node.js ecosystem currently includes millions of packages and is a critical part of modern software supply chains, making...
3 weeks ago cs.CR cs.AI cs.SE
Tool HIGH
Jiamin Chang, Minhui Xue, Ruoxi Sun +3 more
Recent advances in embodied Vision-Language Agentic Systems (VLAS), powered by large vision-language models (LVLMs), enable AI systems to perceive...
3 weeks ago cs.CV cs.AI
Benchmark HIGH
Euntae Kim, Soomin Han, Buru Chang
Large language models (LLMs) are increasingly used as co-authors in collaborative writing, where users begin with rough drafts and rely on LLMs to...
Attack HIGH
MinJae Jung, YongTaek Lim, Chaeyun Kim +3 more
While Large Language Models (LLMs) are widely used, they remain susceptible to jailbreak prompts that can elicit harmful or inappropriate responses....
Tool HIGH
Jiacheng Liang, Yao Ma, Tharindu Kumarage +5 more
Reinforcement Learning from Human Feedback (RLHF) is central to aligning Large Language Models (LLMs), yet it introduces a critical vulnerability: an...
3 weeks ago cs.AI cs.CR cs.LG
Attack HIGH
Hanrui Luo, Shreyank N Gowda
Detecting jailbreak behaviour in large language models remains challenging, particularly when strongly aligned models produce harmful outputs only...
3 weeks ago cs.CL cs.LG
Attack HIGH
Md Rysul Kabir, Zoran Tiganj
Open-weight language models can be rendered unsafe through several distinct interventions, but the resulting models may differ substantially in...
3 weeks ago cs.CR cs.AI cs.CL
Attack HIGH
Thamilvendhan Munirathinam
Current open-source prompt-injection detectors converge on two architectural choices: regular-expression pattern matching and fine-tuned transformer...
3 weeks ago cs.CR cs.CL
Attack HIGH
Wentao Zhang, Yan Zhuang, ZhuHang Zheng +3 more
Existing jamming attacks on Retrieval-Augmented Generation (RAG) systems typically induce explicit refusals or denial-of-service behaviors, which are...
3 weeks ago cs.CR cs.AI
Attack HIGH
Jin Zhao, Marta Knežević, Tanja Käser
Large Language Models (LLMs) are increasingly used in education, yet their default helpfulness often conflicts with pedagogical principles. Prior...
3 weeks ago cs.CR cs.AI
Benchmark HIGH
Parteek Jamwal, Minghao Shao, Boyuan Chen +15 more
Large Language Models (LLMs) have demonstrated remarkable capabilities across various cybersecurity tasks, including vulnerability classification,...
3 weeks ago cs.CR cs.AI cs.MA