Attack MEDIUM
Miao Yu, Zhenhong Zhou, Moayad Aloqaily +5 more
Fine-tuned Large Language Models (LLMs) are vulnerable to backdoor attacks through data poisoning, yet the internal mechanisms governing these...
6 months ago cs.CR cs.AI
PDF
Tool MEDIUM
Prakhar Sharma, Haohuang Wen, Vinod Yegneswaran +3 more
The evolution toward 6G networks is being accelerated by the Open Radio Access Network (O-RAN) paradigm -- an open, interoperable architecture that...
6 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Haibo Tong, Dongcheng Zhao, Guobin Shen +4 more
The remarkable capabilities of Large Language Models (LLMs) have raised significant safety concerns, particularly regarding "jailbreak" attacks that...
6 months ago cs.CR cs.AI
PDF
Defense MEDIUM
Wei Huang, De-Tian Chu, Lin-Yuan Bai +6 more
Modern email spam and phishing attacks have evolved far beyond keyword blacklists or simple heuristics. Adversaries now craft multi-modal campaigns...
6 months ago cs.LG cs.CR
PDF
Attack MEDIUM
Jiahao Huo, Shuliang Liu, Bin Wang +5 more
Semantic-level watermarking (SWM) for large language models (LLMs) enhances watermarking robustness against text modifications and paraphrasing...
6 months ago cs.CR cs.CL
PDF
Attack HIGH
Runqi Lin, Alasdair Paren, Suqin Yuan +4 more
The integration of new modalities enhances the capabilities of multimodal large language models (MLLMs) but also introduces additional...
Tool HIGH
Ping He, Changjiang Li, Binbin Zhao +2 more
The remarkable capability of large language models (LLMs) has led to the wide application of LLM-based agents in various domains. To standardize...
6 months ago cs.CR cs.AI cs.SE
PDF
Benchmark LOW
Panagiotis Michelakis, Yiannis Hadjiyiannis, Dimitrios Stamoulis
Evaluating AI agents that solve real-world tasks through function-call sequences remains an open challenge. Existing agentic benchmarks often reduce...
Attack HIGH
Hanbo Huang, Yiran Zhang, Hao Zheng +4 more
Large Language Models (LLMs) watermarking has shown promise in detecting AI-generated content and mitigating misuse, with prior work claiming...
Attack MEDIUM
Anh Tu Ngo, Anupam Chattopadhyay, Subhamoy Maitra
In this paper, we show that cryptographic backdoors in a neural network (NN) can be highly effective in two directions, namely mounting the attacks as...
6 months ago cs.CR cs.LG
PDF
Benchmark HIGH
Wenkai Guo, Xuefeng Liu, Haolin Wang +3 more
Fine-tuning large language models (LLMs) with local data is a widely adopted approach for organizations seeking to adapt LLMs to their specific...
6 months ago cs.LG cs.CL cs.CR
PDF
Tool HIGH
Adam Swanda, Amy Chang, Alexander Chen +3 more
The widespread adoption of Large Language Models (LLMs) has revolutionized AI deployment, enabling autonomous and semi-autonomous applications across...
6 months ago cs.CR cs.AI
PDF
Defense HIGH
Maria Chiper, Radu Tudor Ionescu
Phishing attacks targeting both organizations and individuals are becoming an increasingly significant threat as technology advances. Current...
6 months ago cs.CR cs.AI cs.CL
PDF
Defense LOW
Dana A Abdullah, Dana Rasul Hamad, Bishar Rasheed Ibrahim +3 more
Altered fingerprint recognition (AFR) is challenging for biometric verification in applications such as border control, forensics, and fiscal...
6 months ago cs.CV cs.CR cs.LG
PDF
Attack HIGH
Atousa Arzanipour, Rouzbeh Behnia, Reza Ebrahimi +1 more
Retrieval-Augmented Generation (RAG) is an emerging approach in natural language processing that combines large language models (LLMs) with external...
6 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Xiaofan Li, Xing Gao
In recent years, various software supply chain (SSC) attacks have posed significant risks to the global community. Severe consequences may arise if...
6 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Wenhan Wu, Zheyuan Liu, Chongyang Gao +2 more
Current LLM unlearning methods face a critical security vulnerability that undermines their fundamental purpose: while they appear to successfully...
6 months ago cs.LG cs.AI
PDF
Attack HIGH
Tanmay Khule, Stefan Marksteiner, Jose Alguindigue +3 more
In modern automotive development, security testing is critical for safeguarding systems against increasingly advanced threats. Attack trees are...
6 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Lauren Deason, Adam Bali, Ciprian Bejean +20 more
Today's cyber defenders are overwhelmed by a deluge of security alerts, threat intelligence signals, and shifting business context, creating an...
6 months ago cs.CR cs.AI
PDF
Attack HIGH
Md Jueal Mia, M. Hadi Amini
Vision-Language Models (VLMs) have remarkable abilities in multimodal reasoning tasks. However, potential misuse or safety alignment...