An Empirical Study on the Security Vulnerabilities of GPTs
Tong Wu, Weibin Wu, Zibin Zheng
Equipped with various tools and knowledge, GPTs, a kind of customized AI agent based on OpenAI's large language models, have illustrated great...
Zeng Wang, Minghao Shao, Akashdeep Saha +4 more
Graph neural networks (GNNs) have shown promise in hardware security by learning structural motifs from netlist graphs. However, this reliance on...
Herman Errico, Jiquan Ngiam, Shanita Sojan
The Model Context Protocol (MCP) replaces static, developer-controlled API integrations with more dynamic, user-driven agent systems, which also...
Sidahmed Benabderrahmane, James Cheney, Talal Rahwan
Advanced Persistent Threats (APTs) pose a significant challenge in cybersecurity due to their stealthy and long-term nature. Modern supervised...
Steven Peh
Large Language Models (LLMs) remain vulnerable to prompt injection attacks, representing the most significant security threat in production...
Adarsh Kumarappan, Ayushi Mehrotra
The SmoothLLM defense provides a certification guarantee against jailbreaking attacks, but it relies on a strict "k-unstable" assumption that rarely...
Itay Hazan, Yael Mathov, Guy Shtar +2 more
Securing AI agents powered by Large Language Models (LLMs) represents one of the most critical challenges in AI security today. Unlike traditional...
Atharv Singh Patlan, Peiyao Sheng, S. Ashwin Hebbar +2 more
Language agents are rapidly expanding from single-user assistants to multi-user collaborators in shared workspaces and groups. However, today's...
Tom Perel
The recent boom and rapid integration of Large Language Models (LLMs) into a wide range of applications warrants a deeper understanding of their...
Hussein Jawad, Nicolas Brunel
System prompts are critical for guiding the behavior of Large Language Models (LLMs), yet they often contain proprietary logic or sensitive...
Fuyao Zhang, Jiaming Zhang, Che Wang +6 more
The reliance of mobile GUI agents on Multimodal Large Language Models (MLLMs) introduces a severe privacy vulnerability: screenshots containing...
Ayush Chaudhary, Sisir Doppalpudi
The deployment of robust malware detection systems in big data environments requires careful consideration of both security effectiveness and...
Thomas Rivasseau
Current Large Language Model alignment research mostly focuses on improving model robustness against adversarial attacks and misbehavior by training...
Onkar Shelar, Travis Desell
Large Language Models remain vulnerable to adversarial prompts that elicit toxic content even after safety alignment. We present ToxSearch, a...
Yuting Tan, Yi Huang, Zhuo Li
Backdoor attacks on large language models (LLMs) typically couple a secret trigger to an explicit malicious output. We show that this explicit...
Sajad U P
Phishing and related cyber threats are becoming more varied and technologically advanced. Among these, email-based phishing remains the most dominant...
Shaowei Guan, Yu Zhai, Zhengyu Zhang +2 more
Large Language Models (LLMs) are increasingly vulnerable to adversarial attacks that can subtly manipulate their outputs. While various defense...
Lucas Fenaux, Christopher Srinivasa, Florian Kerschbaum
Transparency and security are both central to Responsible AI, but they may conflict in adversarial settings. We investigate the strategic effect of...
Farhad Abtahi, Fernando Seoane, Iván Pau +1 more
Healthcare AI systems face major vulnerabilities to data poisoning that current defenses and regulations cannot adequately address. We analyzed eight...
Zixun Xiong, Gaoyi Wu, Qingyang Yu +5 more
Given the high cost of large language model (LLM) training from scratch, safeguarding LLM intellectual property (IP) has become increasingly crucial....