On Stealing Graph Neural Network Models
Marcin Podhajski, Jan Dubiński, Franziska Boenisch +3 more
Current graph neural network (GNN) model-stealing methods rely heavily on queries to the victim model, assuming no hard query limits. However, in...
Yilin Jiang, Mingzi Zhang, Xuanyu Yin +5 more
Large Language Models for Simulating Professions (SP-LLMs), particularly as teachers, are pivotal for personalized education. However, ensuring their...
Nicy Scaria, Silvester John Joseph Kennedy, Deepak Subramani
Small Language Models (SLMs) are increasingly being deployed in resource-constrained environments, yet their behavioral robustness to data...
Dachuan Lin, Guobin Shen, Zihao Yang +3 more
Safety evaluation of large language models (LLMs) increasingly relies on LLM-as-a-judge pipelines, but strong judges can still be expensive to use at...
Amr Gomaa, Ahmed Salem, Sahar Abdelnabi
As language models evolve into autonomous agents that act and communicate on behalf of users, ensuring safety in multi-agent ecosystems becomes a...
Ishan Kavathekar, Hemang Jain, Ameya Rathod +2 more
Large Language Models (LLMs) have demonstrated strong capabilities as autonomous agents through tool use, planning, and decision-making abilities,...
Hadi Reisizadeh, Jiajun Ruan, Yiwei Chen +3 more
Unlearning in large language models (LLMs) is critical for regulatory compliance and for building ethical generative AI systems that avoid producing...
Cyril Vallez, Alexander Sternfeld, Andrei Kucharavy +1 more
As the role of Large Language Model (LLM)-based coding assistants in software development becomes more critical, so does the role of the bugs they...
Shiyin Lin
Software fuzzing has become a cornerstone in automated vulnerability discovery, yet existing mutation strategies often lack semantic awareness,...
Jon Kutasov, Chloe Loughridge, Yuqi Sun +4 more
As AI systems become more capable and widely deployed as agents, ensuring their safe operation becomes critical. AI control offers one approach to...
Patrick Karlsen, Even Eilertsen
This paper investigates some of the risks introduced by "LLM poisoning," the intentional or unintentional introduction of malicious or biased data...
Hanzhong Liang, Yue Duan, Xing Su +5 more
As the Web3 ecosystem evolves toward a multi-chain architecture, cross-chain bridges have become critical infrastructure for enabling...
Ariyan Hossain, Khondokar Mohammad Ahanaf Hannan, Rakinul Haque +4 more
Gender bias in language models has gained increasing attention in the field of natural language processing. Encoder-based transformer models, which...
Heehwan Kim, Sungjune Park, Daeseon Choi
Large Language Models (LLMs) are generally equipped with guardrails to block the generation of harmful responses. However, existing defenses always...
Arnabh Borah, Md Tanvirul Alam, Nidhi Rastogi
Security applications are increasingly relying on large language models (LLMs) for cyber threat detection; however, their opaque reasoning often...
Zishuo Zheng, Vidhisha Balachandran, Chan Young Park +2 more
As large language model (LLM) based systems take on high-stakes roles in real-world decision-making, they must reconcile competing instructions from...
Shaked Zychlinski, Yuval Kainan
Large Language Models (LLMs) are susceptible to jailbreak attacks where malicious prompts are disguised using ciphers and character-level encodings...
Yingjia Wang, Ting Qiao, Xing Liu +3 more
The rapid advancement of deep neural networks (DNNs) heavily relies on large-scale, high-quality datasets. However, unauthorized commercial use of...
Zheng Zhang, Haonan Li, Xingyu Li +2 more
Bug bisection has been an important security task that aims to understand the range of software versions impacted by a bug, i.e., identifying the...
André V. Duarte, Xuying Li, Bin Zeng +3 more
If we cannot inspect the training data of a large language model (LLM), how can we ever know what it has seen? We believe the most compelling...