Attack MEDIUM
Zhexi Lu, Hongliang Chi, Nathalie Baracaldo +3 more
Membership inference attacks (MIAs) pose a critical privacy threat to fine-tuned large language models (LLMs), especially when models are adapted to...
3 months ago cs.CR cs.LG
PDF
Attack HIGH
Hao Li, Yubing Ren, Yanan Cao +4 more
With the rapid development of cloud-based services, large language models (LLMs) have become increasingly accessible through various web platforms....
3 months ago cs.CR cs.CL
PDF
Attack HIGH
Joao Queiroz
Recent evidence shows that the versification of prompts constitutes a highly effective adversarial mechanism against aligned LLMs. The study...
3 months ago cs.CL cs.AI
PDF
Attack MEDIUM
Seok-Hyun Ga, Chun-Yen Chang
The rapid development of Generative AI is bringing innovative changes to education and assessment. As the prevalence of students utilizing AI for...
3 months ago cs.AI cs.CL cs.CY
PDF
Attack HIGH
Pablo Montaña-Fernández, Ines Ortega-Fernandez
Federated Learning is a machine learning setting that reduces direct data exposure, improving the privacy guarantees of machine learning models. Yet,...
3 months ago cs.LG cs.CR
PDF
Attack MEDIUM
Piercosma Bisconti, Marcello Galisai, Matteo Prandi +6 more
Safety mechanisms in LLMs remain vulnerable to attacks that reframe harmful requests through culturally coded structures. We introduce Adversarial...
3 months ago cs.CL cs.AI cs.CY
PDF
Attack HIGH
Xingfu Zhou, Pengfei Wang
Large Language Model (LLM) agents relying on external retrieval are increasingly deployed in high-stakes environments. While existing adversarial...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Yunhao Yao, Zhiqiang Wang, Haoran Cheng +3 more
The evolution of Large Language Models (LLMs) into Agentic AI has established the Model Context Protocol (MCP) as the standard for connecting...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Shuxin Zhao, Bo Lang, Nan Xiao +1 more
Object detection models deployed in real-world applications such as autonomous driving face serious threats from backdoor attacks. Despite their...
3 months ago cs.CV cs.CR
PDF
Attack HIGH
Sabrine Ennaji, Elhadj Benkhelifa, Luigi Vincenzo Mancini
Machine learning-based intrusion detection systems are increasingly targeted by black-box adversarial attacks, where attackers craft evasive inputs...
3 months ago cs.CR cs.AI
PDF
Attack MEDIUM
David Lindner, Charlie Griffin, Tomek Korbak +4 more
Automated control monitors could play an important role in overseeing highly capable AI agents that we do not fully trust. Prior work has explored...
3 months ago cs.CR cs.AI cs.MA
PDF
Attack HIGH
Karina Chichifoi, Fabio Merizzi, Michele Colajanni
Deep learning and federated learning (FL) are becoming powerful partners for next-generation weather forecasting. Deep learning enables...
3 months ago cs.LG cs.CR
PDF
Attack MEDIUM
Samruddhi Baviskar
We evaluate adversarial robustness in tabular machine learning models used in financial decision making. Using credit scoring and fraud detection...
3 months ago cs.LG cs.AI cs.CR
PDF
Attack MEDIUM
Mohammad Mahdi Razmjoo, Mohammad Mahdi Sharifian, Saeed Bagheri Shouraki
Despite their remarkable performance, deep neural networks exhibit a critical vulnerability: small, often imperceptible, adversarial perturbations...
3 months ago cs.LG cs.CR cs.CV
PDF
Attack MEDIUM
Li Lin, Siyuan Xin, Yang Cao +1 more
Watermarking large language models (LLMs) is vital for preventing their misuse, including the fabrication of fake news, plagiarism, and spam. It is...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Md. Hasib Ur Rahman
As Large Language Models (LLMs) become ubiquitous, the challenge of securing them against adversarial "jailbreaking" attacks has intensified. Current...
3 months ago cs.LG cs.AI
PDF
Attack HIGH
Yixin Tan, Zhe Yu, Jun Sakuma
Fine-tuning pretrained large language models (LLMs) has become the standard paradigm for developing downstream applications. However, its security...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Safwan Shaheer, G. M. Refatul Islam, Mohammad Rafid Hamid +3 more
Prompt injection attacks can compromise the security and stability of critical systems, from infrastructure to large web applications. This work...
3 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Hua Ma, Ruoxi Sun, Minhui Xue +4 more
Accurate time-series forecasting is increasingly critical for planning and operations in low-carbon power systems. Emerging time-series large...
3 months ago cs.CR cs.LG
PDF
Attack HIGH
Peichun Hua, Hao Li, Shanghao Shi +2 more
Large Vision-Language Models (LVLMs) are vulnerable to a growing array of multimodal jailbreak attacks, necessitating defenses that are both...
3 months ago cs.CR cs.AI cs.CL
PDF