An Empirical Study on the Security Vulnerabilities of GPTs
Tong Wu, Weibin Wu, Zibin Zheng
Equipped with various tools and knowledge, GPTs, customized AI agents built on OpenAI's large language models, have demonstrated great...
Kaixiang Wang, Zhaojiacheng Zhou, Bunyod Suvonov +2 more
Large Language Model (LLM)-based Multi-Agent Systems (MAS) are susceptible to linguistic attacks that can trigger cascading failures across the...
Anudeex Shetty
Large Language Models (LLMs) have demonstrated exceptional capabilities in natural language understanding and generation. Based on these LLMs,...
Zeng Wang, Minghao Shao, Akashdeep Saha +4 more
Graph neural networks (GNNs) have shown promise in hardware security by learning structural motifs from netlist graphs. However, this reliance on...
Abeer Matar A. Almalky, Ziyan Wang, Mohaiminul Al Nahian +2 more
In recent years, large language models (LLMs) have achieved substantial advancements and are increasingly integrated into critical applications...
Mohaiminul Al Nahian, Abeer Matar A. Almalky, Gamana Aragonda +6 more
Adversarial weight perturbation has emerged as a concerning threat to LLMs, in which attackers use either training privileges or system-level access to inject...
Shaona Ghosh, Barnaby Simkin, Kyriacos Shiarlis +9 more
This paper introduces a dynamic and actionable framework for securing agentic AI systems in enterprise deployment. We contend that safety and...
Gauri Pradhan, Joonas Jälkö, Santiago Zanella-Béguelin +1 more
Training machine learning models with differential privacy (DP) limits an adversary's ability to infer sensitive information about the training data....
Rebeka Toth, Tamas Bisztray, Richard Dubniczky
Phishing and spam emails remain a major cybersecurity threat, with attackers increasingly leveraging Large Language Models (LLMs) to craft highly...
Rebeka Toth, Tamas Bisztray, Nils Gruschka
In this paper, we introduce a metadata-enriched generation framework (PhishFuzzer) that seeds real emails into Large Language Models (LLMs) to...
Di Zhu, Chen Xie, Ziwei Wang +1 more
New York City reports over one hundred thousand motor vehicle collisions each year, creating a substantial injury and public health burden. We present...
Herman Errico, Jiquan Ngiam, Shanita Sojan
The Model Context Protocol (MCP) replaces static, developer-controlled API integrations with more dynamic, user-driven agent systems, which also...
Jaehwan Park, Kyungchan Lim, Seonhye Park +1 more
The advent of Artificial Intelligence (AI), particularly large language models (LLMs), has revolutionized software development by enabling developers...
Wei He, Kai Han, Hang Zhou +4 more
The optimization of large language models (LLMs) remains a critical challenge, particularly as model scaling exacerbates sensitivity to algorithmic...
Momoko Shiraishi, Yinzhi Cao, Takahiro Shinagawa
Command-line interface (CLI) fuzzing tests programs by mutating both command-line options and input file contents, thus enabling discovery of...
Sidahmed Benabderrahmane, James Cheney, Talal Rahwan
Advanced Persistent Threats (APTs) pose a significant challenge in cybersecurity due to their stealthy and long-term nature. Modern supervised...
Xuebo Qiu, Mingqi Lv, Yimei Zhang +4 more
Provenance-based threat hunting identifies Advanced Persistent Threats (APTs) on endpoints by correlating attack patterns described in Cyber Threat...
David Amebley, Sayanton Dibbo
In the age of agentic AI, the growing deployment of multi-modal models (MMs) has introduced new attack vectors that can leak sensitive training data...
Abhijeet Pathak, Suvadra Barua, Dinesh Gudimetla +4 more
Large language models (LLMs) and autonomous coding agents are increasingly used to generate software across a wide range of domains. Yet a core...
Steven Peh
Large Language Models (LLMs) remain vulnerable to prompt injection attacks, representing the most significant security threat in production...