Large Language Models for Cyber Security
Raunak Somani, Aswani Kumar Cherukuri
This paper studies the integration of Large Language Models into cybersecurity tools and protocols. The main issue discussed in this paper is how...
2,583+ academic papers on AI security, attacks, and defenses
Pedro Pereira, José Gouveia, João Vitorino +2 more
Magecart skimming attacks have emerged as a significant threat to client-side security and user trust in online payment systems. This paper addresses...
Tim Beyer, Jonas Dornbusch, Jakob Steimle +3 more
The rapid expansion of research on Large Language Model (LLM) safety and robustness has produced a fragmented and oftentimes buggy ecosystem of...
Hongwei Yao, Yun Xia, Shuo Shao +3 more
Large language models (LLMs) increasingly employ guardrails to enforce ethical, legal, and application-specific constraints on their outputs. While...
Oshando Johnson, Alexandra Fomina, Ranjith Krishnamurthy +3 more
The prevalence of security vulnerabilities has prompted companies to adopt static application security testing (SAST) tools for vulnerability...
Hirohane Takagi, Gouki Minegishi, Shota Kizawa +2 more
Although behavioral studies have documented numerical reasoning errors in large language models (LLMs), the underlying representational mechanisms...
Hao Zhu, Jia Li, Cuiyun Gao +7 more
Large language models (LLMs) have achieved remarkable progress in code understanding tasks. However, they demonstrate limited performance in...
Shiyin Lin
Software fuzzing has become a cornerstone in automated vulnerability discovery, yet existing mutation strategies often lack semantic awareness,...
Mohammad Atif Quamar, Mohammad Areeb, Mikhail Kuznetsov +2 more
Aligning large language models (LLMs) with human values is crucial for safe deployment. Inference-time techniques offer granular control over...
Geoff McDonald, Jonathan Bar Or
Large Language Models (LLMs) are increasingly deployed in sensitive domains including healthcare, legal services, and confidential communications,...
Wendong Xu, Chujie Chen, He Xiao +8 more
Large Language Model (LLM) inference services demand exceptionally high availability and low latency, yet multi-GPU Tensor Parallelism (TP) makes...
Botao 'Amber' Hu, Helena Rong
As the "agentic web" takes shape, with billions of AI agents (often LLM-powered) autonomously transacting and collaborating, trust shifts from human...
Yize Liu, Yunyun Hou, Aina Sui
Large Language Models (LLMs) have been widely deployed across various applications, yet their potential security and ethical risks have raised...
Amy Chang, Nicholas Conley, Harish Santhanalakshmi Ganesan +1 more
Open-weight models provide researchers and developers with accessible foundations for diverse downstream applications. We tested the safety and...
Bryce-Allen Bagley, Navin Khoshnan
The complexity of human cognition has meant that psychology makes more use of theory and conceptual models than perhaps any other biomedical field....
Qi Li, Jianjun Xu, Pingtao Wei +8 more
With the widespread application of Large Language Models (LLMs), their associated security issues have become increasingly prominent, severely...
Rishi Rajesh Shah, Chen Henry Wu, Shashwat Saxena +3 more
Recent advances in long-context language models (LMs) have enabled million-token inputs, expanding their capabilities across complex tasks like...
Jon Kutasov, Chloe Loughridge, Yuqi Sun +4 more
As AI systems become more capable and widely deployed as agents, ensuring their safe operation becomes critical. AI control offers one approach to...
Chloe Loughridge, Paul Colognese, Avery Griffin +3 more
As AI deployments become more complex and high-stakes, it becomes increasingly important to be able to estimate their risk. AI control is one...
Gian Maria Campedelli
While the possibility of reaching human-like Artificial Intelligence (AI) remains controversial, the likelihood that the future will be characterized...