Patch Validation in Automated Vulnerability Repair
Zheng Yu, Wenxuan Shi, Xinqian Sun +3 more
Automated Vulnerability Repair (AVR) systems, especially those leveraging large language models (LLMs), have demonstrated promising results in...
Jinman Wu, Yi Xie, Shiqian Zhao +1 more
Currently, open-sourced large language models (OSLLMs) have demonstrated remarkable generative performance. However, as their structure and weights...
Touseef Hasan, Blessing Airehenbuwa, Nitin Pundir +2 more
Large language models (LLMs) have shown remarkable capabilities in natural language processing tasks, yet their application in hardware security...
Yuanbo Li, Tianyang Xu, Cong Hu +3 more
The rapid progress of Multi-Modal Large Language Models (MLLMs) has significantly advanced downstream applications. However, this progress also...
Max Landauer, Wolfgang Hotwagner, Thorina Boenke +2 more
Log data are essential for intrusion detection and forensic investigations. However, manual log analysis is tedious due to high data volumes,...
Junchen Li, Chao Qi, Rongzheng Wang +5 more
Retrieval-Augmented Generation (RAG) enhances the capabilities of large language models (LLMs) by incorporating external knowledge, but its reliance...
Wang Jian, Shen Hong, Ke Wei +1 more
While federated learning protects data privacy, it also makes the model update process vulnerable to long-term stealthy perturbations. Existing...
Yangyang Wei, Yijie Xu, Zhenyuan Li +2 more
Multi-Agent Systems are emerging as the de facto standard for complex task orchestration. However, their reliance on autonomous execution and...
Neha Nagaraja, Lan Zhang, Zhilong Wang +2 more
Multimodal Large Language Models (MLLMs) integrate vision and text to power applications, but this integration introduces new vulnerabilities. We...
Zhi Xu, Jiaqi Li, Xiaotong Zhang +2 more
Large language models (LLMs) have achieved remarkable success across diverse applications but remain vulnerable to jailbreak attacks, where attackers...
Peter Horvath, Ilia Shumailov, Lukasz Chmielewski +2 more
The multi-million dollar investment required for modern machine learning (ML) has made large ML models a prime target for theft. In response, the...
Jiayao Wang, Mohammad Maruf Hasan, Yiping Zhang +5 more
Self-Supervised Learning (SSL) has emerged as a significant paradigm in representation learning thanks to its ability to learn without extensive...
Huw Day, Adrianna Jezierska, Jessica Woodgate
Large Language Models have intensified the scale and strategic manipulation of political discourse on social media, leading to conflict escalation....
Xiaoyi Pang, Xuanyi Hao, Pengyu Liu +3 more
Recent intelligent systems integrate powerful Large Language Models (LLMs) through APIs, but their trustworthiness may be critically undermined by...
Duoxun Tang, Dasen Dai, Jiyao Wang +3 more
Video-LLMs are increasingly deployed in safety-critical applications but are vulnerable to Energy-Latency Attacks (ELAs) that exhaust computational...
Xinyu Huang, Qiang Yang, Leming Shen +2 more
Embodied Large Language Models (LLMs) enable AI agents to interact with the physical world through natural language instructions and actions....
Masahiro Kaneko, Ayana Niwa, Timothy Baldwin
Fake news undermines societal trust and decision-making across politics, economics, health, and international relations, and in extreme cases...
Mingcheng Jiang, Jiancheng Huang, Jiangfei Wang +5 more
Static Application Security Testing (SAST) tools often suffer from high false positive rates, leading to alert fatigue that consumes valuable...