CAR-Scenes: Semantic VLM Dataset for Safe Autonomous Driving
Yuankai He, Weisong Shi
CAR-Scenes is a frame-level dataset for autonomous driving that enables training and evaluation of vision-language models (VLMs) for interpretable,...
2,560+ academic papers on AI security, attacks, and defenses
Showing 561–580 of 736 papers
Jiarui Liu, Kaustubh Dhole, Yingheng Wang +7 more
Deductive reasoning is the process of deriving conclusions strictly from the given premises, without relying on external knowledge. We define honesty...
Zexu Wang, Jiachi Chen, Zewei Lin +7 more
Smart contracts have significantly advanced blockchain technology, and digital signatures are crucial for reliable verification of contract...
Shengbo Wang, Hong Sun, Ke Li
Interactive preference elicitation (IPE) aims to substantially reduce human effort while acquiring human preferences in a wide range of personalization systems....
Yunfei Yang, Xiaojun Chen, Yuexin Xuan +3 more
Model watermarking techniques can embed watermark information into the protected model for ownership declaration by constructing specific...
Kazuki Iwahana, Yusuke Yamasaki, Akira Ito +2 more
Backdoor attacks pose a critical threat to machine learning models, causing them to behave normally on clean data but misclassify poisoned data into...
Junxiao Han, Zheng Yu, Lingfeng Bao +5 more
The widespread adoption of open-source software (OSS) has accelerated software innovation but also increased security risks due to the rapid...
Zhishen Sun, Guang Dai, Haishan Ye
LLMs demonstrate performance comparable to that of humans in complex tasks such as mathematical reasoning, but their robustness in mathematical...
Manh Nguyen, Sunil Gupta, Hung Le
Large Language Models (LLMs) exhibit strong performance across various natural language processing (NLP) tasks but remain vulnerable to...
Binyan Xu, Fan Yang, Di Tang +2 more
Clean-image backdoor attacks, which use only label manipulation in training datasets to compromise deep neural networks, pose a significant threat to...
Marcin Podhajski, Jan Dubiński, Franziska Boenisch +3 more
Current graph neural network (GNN) model-stealing methods rely heavily on queries to the victim model, assuming no hard query limits. However, in...
Yilin Jiang, Mingzi Zhang, Xuanyu Yin +5 more
Large Language Models for Simulating Professions (SP-LLMs), particularly as teachers, are pivotal for personalized education. However, ensuring their...
Nicy Scaria, Silvester John Joseph Kennedy, Deepak Subramani
Small Language Models (SLMs) are increasingly being deployed in resource-constrained environments, yet their behavioral robustness to data...
Dachuan Lin, Guobin Shen, Zihao Yang +3 more
Safety evaluation of large language models (LLMs) increasingly relies on LLM-as-a-judge pipelines, but strong judges can still be expensive to use at...
Azanzi Jiomekong, Jean Bikim, Patricia Negoue +1 more
Evaluating semantic table interpretation (STI) systems (particularly those based on Large Language Models, LLMs), especially in domain-specific...
Amr Gomaa, Ahmed Salem, Sahar Abdelnabi
As language models evolve into autonomous agents that act and communicate on behalf of users, ensuring safety in multi-agent ecosystems becomes a...
Ishan Kavathekar, Hemang Jain, Ameya Rathod +2 more
Large Language Models (LLMs) have demonstrated strong capabilities as autonomous agents through tool use, planning, and decision-making abilities,...
Hadi Reisizadeh, Jiajun Ruan, Yiwei Chen +3 more
Unlearning in large language models (LLMs) is critical for regulatory compliance and for building ethical generative AI systems that avoid producing...
Cyril Vallez, Alexander Sternfeld, Andrei Kucharavy +1 more
As the role of Large Language Models (LLM)-based coding assistants in software development becomes more critical, so does the role of the bugs they...
Shiyin Lin
Software fuzzing has become a cornerstone in automated vulnerability discovery, yet existing mutation strategies often lack semantic awareness,...