Attack HIGH
Yunhao Yao, Zhiqiang Wang, Haoran Cheng +3 more
The evolution of Large Language Models (LLMs) into Agentic AI has established the Model Context Protocol (MCP) as the standard for connecting...
4 months ago cs.CR cs.AI
PDF
Attack HIGH
Shuxin Zhao, Bo Lang, Nan Xiao +1 more
Object detection models deployed in real-world applications such as autonomous driving face serious threats from backdoor attacks. Despite their...
4 months ago cs.CV cs.CR
PDF
Attack HIGH
Sabrine Ennaji, Elhadj Benkhelifa, Luigi Vincenzo Mancini
Machine learning based intrusion detection systems are increasingly targeted by black box adversarial attacks, where attackers craft evasive inputs...
4 months ago cs.CR cs.AI
PDF
Attack MEDIUM
David Lindner, Charlie Griffin, Tomek Korbak +4 more
Automated control monitors could play an important role in overseeing highly capable AI agents that we do not fully trust. Prior work has explored...
4 months ago cs.CR cs.AI cs.MA
PDF
Attack HIGH
Karina Chichifoi, Fabio Merizzi, Michele Colajanni
Deep learning and federated learning (FL) are becoming powerful partners for next-generation weather forecasting. Deep learning enables...
4 months ago cs.LG cs.CR
PDF
Attack MEDIUM
Samruddhi Baviskar
We evaluate adversarial robustness in tabular machine learning models used in financial decision making. Using credit scoring and fraud detection...
4 months ago cs.LG cs.AI cs.CR
PDF
Attack MEDIUM
Mohammad Mahdi Razmjoo, Mohammad Mahdi Sharifian, Saeed Bagheri Shouraki
Despite their remarkable performance, deep neural networks exhibit a critical vulnerability: small, often imperceptible, adversarial perturbations...
4 months ago cs.LG cs.CR cs.CV
PDF
Attack MEDIUM
Li Lin, Siyuan Xin, Yang Cao +1 more
Watermarking large language models (LLMs) is vital for preventing their misuse, including the fabrication of fake news, plagiarism, and spam. It is...
4 months ago cs.CR cs.AI
PDF
Attack HIGH
Md. Hasib Ur Rahman
As Large Language Models (LLMs) become ubiquitous, the challenge of securing them against adversarial "jailbreaking" attacks has intensified. Current...
4 months ago cs.LG cs.AI
PDF
Attack HIGH
Yixin Tan, Zhe Yu, Jun Sakuma
Finetuning pretrained large language models (LLMs) has become the standard paradigm for developing downstream applications. However, its security...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Safwan Shaheer, G. M. Refatul Islam, Mohammad Rafid Hamid +3 more
Prompt injection attacks can compromise the security and stability of critical systems, from infrastructure to large web applications. This work...
5 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Hua Ma, Ruoxi Sun, Minhui Xue +4 more
Accurate time-series forecasting is increasingly critical for planning and operations in low-carbon power systems. Emerging time-series large...
5 months ago cs.CR cs.LG
PDF
Attack HIGH
Peichun Hua, Hao Li, Shanghao Shi +2 more
Large Vision-Language Models (LVLMs) are vulnerable to a growing array of multimodal jailbreak attacks, necessitating defenses that are both...
5 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Jie Ma, Junqing Zhang, Guanxiong Shen +2 more
Radio frequency fingerprint identification (RFFI) is an emerging technique for the lightweight authentication of wireless Internet of things (IoT)...
5 months ago cs.CR cs.LG
PDF
Attack MEDIUM
Jamal Al-Karaki, Muhammad Al-Zafar Khan, Rand Derar Mohammad Al Athamneh
The scarcity of cyberattack data hinders the development of robust intrusion detection systems. This paper introduces PHANTOM, a novel adversarial...
5 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Jing Cui, Yufei Han, Jianbin Jiao +1 more
Backdoor attacks embed malicious behaviors into Large Language Models (LLMs), enabling adversaries to trigger harmful outputs or bypass safety...
5 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Neha, Tarunpreet Bhatia
Intrusion Detection Systems (IDS) are critical components in safeguarding 5G/6G networks from both internal and external cyber threats. While...
5 months ago cs.CR cs.LG
PDF
Attack HIGH
Khurram Khalil, Khaza Anuarul Hoque
Generative Artificial Intelligence models, such as Large Language Models (LLMs) and Large Vision Models (VLMs), exhibit state-of-the-art performance...
5 months ago cs.CR cs.AI
PDF
Attack HIGH
Mohamed Afane, Abhishek Satyam, Ke Chen +3 more
Backdoor attacks create significant security threats to language models by embedding hidden triggers that manipulate model behavior during inference,...
5 months ago cs.CR cs.CL
PDF
Attack HIGH
Reachal Wang, Yuqi Jia, Neil Zhenqiang Gong
Prompt injection attacks aim to contaminate the input data of an LLM to mislead it into completing an attacker-chosen task instead of the intended...
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial