Attack MEDIUM
Chenghao Du, Quanfeng Huang, Tingxuan Tang +3 more
Large Language Models (LLMs) have transformed software development, enabling AI-powered applications known as LLM-based agents that promise to...
Attack HIGH
Alex Irpan, Alexander Matt Turner, Mark Kurzeja +2 more
An LLM's factuality and refusal training can be compromised by simple changes to a prompt. Models often adopt user beliefs (sycophancy) or satisfy...
4 months ago cs.LG cs.AI
PDF
Attack HIGH
David Schmotz, Sahar Abdelnabi, Maksym Andriushchenko
Enabling continual learning in LLMs remains a key unresolved research challenge. In a recent announcement, a frontier LLM company made a step towards...
Attack MEDIUM
Haohua Duan, Liyao Xiang, Xin Zhang
Watermarking schemes for large language models (LLMs) have been proposed to identify the source of the generated text, mitigating the potential...
4 months ago cs.CR cs.CL cs.LG
PDF
Attack MEDIUM
Lisha Shuai, Jiuling Dong, Nan Zhang +5 more
Local Differential Privacy (LDP) is a widely adopted privacy-protection model in the Internet of Things (IoT) due to its lightweight, decentralized,...
Attack MEDIUM
Guangzhi Su, Shuchang Huang, Yutong Ke +3 more
Multimodal large language models (MLLMs) have achieved impressive performance across diverse tasks by jointly reasoning over textual and visual...
4 months ago cs.LG cs.CR
PDF
Attack LOW
Svetlana Churina, Niranjan Chebrolu, Kokil Jaidka
We show that continual pretraining on plausible misinformation can overwrite specific factual knowledge in large language models without degrading...
4 months ago cs.LG cs.CR
PDF
Attack HIGH
Zirui Cheng, Jikai Sun, Anjun Gao +4 more
Large language models (LLMs) have transformed natural language processing (NLP), enabling applications from content generation to decision support....
4 months ago cs.CR cs.IR cs.LG
PDF
Attack MEDIUM
Elizabeth Lin, Jonah Ghebremichael, William Enck +5 more
Software supply chains, while providing immense economic and software development value, are only as strong as their weakest link. Over the past...
Attack LOW
Sathwik Narkedimilli, N V Saran Kumar, Aswath Babu H +4 more
Current quantum machine learning approaches often face challenges balancing predictive accuracy, robustness, and interpretability. To address this,...
4 months ago cs.LG cs.CR
PDF
Attack LOW
Viktoriia Zinkovich, Anton Antonov, Andrei Spiridonov +6 more
Multimodal large language models (MLLMs) have shown impressive capabilities in vision-language tasks such as reasoning segmentation, where models...
4 months ago cs.CL cs.CV
PDF
Attack HIGH
Ziyao Cui, Minxing Zhang, Jian Pei
Privacy concerns have become increasingly critical in modern AI and data science applications, where sensitive information is collected, analyzed,...
4 months ago cs.CR cs.LG
PDF
Attack HIGH
Yufan Liu, Wanqian Zhang, Huashan Chen +4 more
Despite rapid advancements in text-to-image (T2I) models, their safety mechanisms are vulnerable to adversarial prompts, which maliciously generate...
Attack HIGH
Yuchong Xie, Zesen Liu, Mingyu Luo +7 more
Modern coding agents integrated into IDEs orchestrate powerful tools and high-privilege system access, creating a high-stakes attack surface. Prior...
4 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Myeongseob Ko, Nikhil Reddy Billa, Adam Nguyen +3 more
The memorization of training data in large language models (LLMs) poses significant privacy and copyright concerns. Existing data extraction methods,...
5 months ago cs.CL cs.AI
PDF
Attack HIGH
Zesen Liu, Zhixiang Zhang, Yuchong Xie +1 more
LLM-powered agents often use prompt compression to reduce inference costs, but this introduces a new security risk. Compression modules, which are...
5 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Bin Wang, YiLu Zhong, MiDi Wan +4 more
Large language models (LLMs) have become indispensable for automated code generation, yet the quality and security of their outputs remain a critical...
5 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Jiaxiang Liu, Jiawei Du, Xiao Liu +2 more
Pre-trained vision-language models (VLMs) such as CLIP have demonstrated strong zero-shot capabilities across diverse domains, yet remain highly...
Attack HIGH
Dongyi Liu, Jiangtong Li, Dawei Cheng +1 more
Graph Neural Networks(GNNs) are vulnerable to backdoor attacks, where adversaries implant malicious triggers to manipulate model predictions....
5 months ago cs.CR cs.LG
PDF
Attack MEDIUM
Devon A. Kelly, Christiana Chamon
Wide-bandgap (WBG) technologies offer unprecedented improvements in power system efficiency, size, and performance, but also introduce unique sensor...
5 months ago cs.CR cs.LG eess.SY
PDF
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial