Position: Privacy Is Not Just Memorization!
Niloofar Mireshghallah, Tianshi Li
The discourse on privacy risks in Large Language Models (LLMs) has disproportionately focused on verbatim memorization of training data, while a...
Youwei Bao, Shuhan Yang, Hyunsoo Yang
Deterministic pseudorandom number generators (PRNGs) used in generative artificial intelligence (GAI) models produce predictable patterns vulnerable...
Zhenyu Pan, Yiting Zhang, Zhuo Liu +13 more
LLM-based multi-agent systems excel at planning, tool use, and role coordination, but their openness and interaction complexity also expose them to...
Luoxi Tang, Yuqiao Meng, Ankita Patra +3 more
Large Language Models (LLMs) are intensively used to assist security analysts in counteracting the rapid exploitation of cyber threats, wherein LLMs...
Jaiden Fairoze, Sanjam Garg, Keewoo Lee +1 more
As large language models (LLMs) advance, ensuring AI safety and alignment is paramount. One popular approach is prompt guards, lightweight mechanisms...
Luca Cotti, Idilio Drago, Anisa Rula +2 more
System logs represent a valuable source of Cyber Threat Intelligence (CTI), capturing attacker behaviors, exploited vulnerabilities, and traces of...
Guobin Shen, Dongcheng Zhao, Haibo Tong +3 more
Ensuring Large Language Model (LLM) safety remains challenging due to the absence of universal standards and reliable content validators, making it...
Yicheng Lang, Yihua Zhang, Chongyu Fan +3 more
Large language model (LLM) unlearning aims to surgically remove the influence of undesired data or knowledge from an existing model while preserving...
Yen-Shan Chen, Sian-Yao Huang, Cheng-Lin Yang +1 more
Existing data poisoning attacks on retrieval-augmented generation (RAG) systems scale poorly because they require costly optimization of poisoned...
Andrew Gan, Zahra Ghodsi
Machine learning systems increasingly rely on open-source artifacts such as datasets and models that are created or hosted by other parties. The...
Hongbo Liu, Jiannong Cao, Bo Yang +7 more
The rapid advancement of large language models (LLMs) in recent years has revolutionized the AI landscape. However, the deployment model and usage of...
Tsubasa Takahashi, Shojiro Yamabe, Futa Waseda +1 more
Differential Attention (DA) has been proposed as a refinement to standard attention, suppressing redundant or noisy context through a subtractive...
Yu Yan, Siqi Lu, Yang Gao +4 more
Recently, the Bit-Flip Attack (BFA) has garnered widespread attention for its ability to compromise software system integrity remotely through hardware...
Dalal Alharthi, Ivan Roberto Kawaminami Garcia
Large Language Models (LLMs) have gained prominence in domains including cloud security and forensics. Yet cloud forensic investigations still rely...
Dalal Alharthi, Ivan Roberto Kawaminami Garcia
Large language models have gained widespread prominence, yet their vulnerability to prompt injection and other adversarial attacks remains a critical...
Samar Fares, Nurbek Tastan, Noor Hussein +1 more
Generative models can generate photorealistic images at scale. This raises urgent concerns about the ability to detect synthetically generated images...
Ehsan Aghaei, Sarthak Jain, Prashanth Arun +1 more
Effective analysis of cybersecurity and threat intelligence data demands language models that can interpret specialized terminology, complex document...
Luis Burbano, Diego Ortiz, Qi Sun +5 more
Embodied Artificial Intelligence (AI) promises to handle edge cases in robotic vehicle systems where data is scarce by using common-sense reasoning...
Anshul Nasery, Edoardo Contente, Alkin Kaz +2 more
Model fingerprinting has emerged as a promising paradigm for claiming model ownership. However, robustness evaluations of these schemes have mostly...
Matheus Vinicius da Silva de Oliveira, Jonathan de Andrade Silva, Awdren de Lima Fontao
Large Language Models (LLMs) are widely used across multiple domains but continue to raise concerns regarding security and fairness. Beyond known...