Red Teaming Large Reasoning Models
Jiawei Chen, Yang Yang, Chao Yu +6 more
Large Reasoning Models (LRMs) have emerged as a powerful advancement in multi-step reasoning tasks, offering enhanced transparency and logical...
2,529+ academic papers on AI security, attacks, and defenses
Showing 501–520 of 551 papers
Aayush Garg, Zanis Ali Khan, Renzo Degiovanni +1 more
Automated vulnerability patching is crucial for software security, and recent advancements in Large Language Models (LLMs) present promising...
Peng Kuang, Xiangxiang Wang, Wentao Liu +2 more
Multimodal Large Language Models (MLLMs) have achieved impressive performances in mathematical reasoning, yet they remain vulnerable to visual...
Anudeex Shetty
Large Language Models (LLMs) have demonstrated exceptional capabilities in natural language understanding and generation. Based on these LLMs,...
Abeer Matar A. Almalky, Ziyan Wang, Mohaiminul Al Nahian +2 more
In recent years, large language models (LLMs) have achieved substantial advancements and are increasingly integrated into critical applications...
Mohaiminul Al Nahian, Abeer Matar A. Almalky, Gamana Aragonda +6 more
Adversarial weight perturbation has emerged as a concerning threat to LLMs that either use training privileges or system-level access to inject...
Gauri Pradhan, Joonas Jälkö, Santiago Zanella-Béguelin +1 more
Training machine learning models with differential privacy (DP) limits an adversary's ability to infer sensitive information about the training data....
Junjian Wang, Lidan Zhao, Xi Sheryl Zhang
Ensuring the safety of embodied AI agents during task planning is critical for real-world deployment, especially in household environments where...
Rebeka Toth, Tamas Bisztray, Nils Gruschka
In this paper, we introduce a metadata-enriched generation framework (PhishFuzzer) that seeds real emails into Large Language Models (LLMs) to...
Rebeka Toth, Tamas Bisztray, Richard Dubniczky
Phishing and spam emails remain a major cybersecurity threat, with attackers increasingly leveraging Large Language Models (LLMs) to craft highly...
Di Zhu, Chen Xie, Ziwei Wang +1 more
New York City reports over one hundred thousand motor vehicle collisions each year, creating substantial injury and public health burden. We present...
Momoko Shiraishi, Yinzhi Cao, Takahiro Shinagawa
Command-line interface (CLI) fuzzing tests programs by mutating both command-line options and input file contents, thus enabling discovery of...
Xuebo Qiu, Mingqi Lv, Yimei Zhang +4 more
Provenance-based threat hunting identifies Advanced Persistent Threats (APTs) on endpoints by correlating attack patterns described in Cyber Threat...
David Amebley, Sayanton Dibbo
In the age of agentic AI, the growing deployment of multi-modal models (MMs) has introduced new attack vectors that can leak sensitive training data...
Abhijeet Pathak, Suvadra Barua, Dinesh Gudimetla +4 more
Large language models (LLMs) and autonomous coding agents are increasingly used to generate software across a wide range of domains. Yet a core...
Angelo Gaspar Diniz Nogueira, Kayua Oleques Paim, Hendrio Bragança +2 more
The ever-increasing number of Android devices and the accelerated evolution of malware, reaching over 35 million samples by 2024, highlight the...
Yu Cui, Yifei Liu, Hang Fu +4 more
Research on the safety evaluation of large language models (LLMs) has become extensive, driven by jailbreak studies that elicit unsafe responses....
Rong Feng, Suman Saha
Obfuscation poses a persistent challenge for software engineering tasks such as program comprehension, maintenance, testing, and vulnerability...
Andrew Maranhão Ventura D'Addario
The integration of Large Language Models (LLMs) into healthcare demands a safety paradigm rooted in primum non nocere. However, current...
Muhammad Usman Shahid, Chuadhry Mujeeb Ahmed, Rajiv Ranjan
The security of code generated by large language models (LLMs) is a significant concern, as studies indicate that such code often contains...