Attack MEDIUM
Sajad U P
Phishing and related cyber threats are becoming more varied and technologically advanced. Among these, email-based phishing remains the dominant...
5 months ago cs.CR cs.AI cs.LG
Attack MEDIUM
Shaowei Guan, Yu Zhai, Zhengyu Zhang +2 more
Large Language Models (LLMs) are increasingly vulnerable to adversarial attacks that can subtly manipulate their outputs. While various defense...
5 months ago cs.CR cs.AI
Benchmark MEDIUM
Shanmin Wang, Dongdong Zhao
Knowledge Distillation (KD) is essential for compressing large models, yet relying on pre-trained "teacher" models downloaded from third-party...
5 months ago cs.CR cs.AI cs.CV
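The distillation setup this entry refers to can be illustrated with a minimal sketch. This is not the paper's method; the temperature `T`, weighting `alpha`, and function names below are illustrative assumptions for the standard soft-label KD objective:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis (numerically stable).
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Standard KD sketch: alpha * T^2 * KL(teacher_T || student_T)
    plus (1 - alpha) * cross-entropy against the hard labels."""
    p_t = softmax(teacher_logits, T)   # soft targets taken from the teacher
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * (T ** 2) * kl + (1 - alpha) * ce))
```

Note that the third-party-teacher risk the abstract points to arises exactly here: the soft targets `p_t` are trusted blindly, so a compromised teacher can steer the student.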
Attack MEDIUM
Lucas Fenaux, Christopher Srinivasa, Florian Kerschbaum
Transparency and security are both central to Responsible AI, but they may conflict in adversarial settings. We investigate the strategic effect of...
5 months ago cs.LG cs.CR cs.GT
Benchmark MEDIUM
Yanbo Dai, Zongjie Li, Zhenlan Ji +1 more
Large language models (LLMs) have achieved remarkable success across a wide range of natural language processing tasks, demonstrating human-level...
Defense MEDIUM
Ruoxi Cheng, Haoxuan Ma, Teng Ma +1 more
Large Vision-Language Models (LVLMs) exhibit powerful reasoning capabilities but suffer from sophisticated jailbreak vulnerabilities. Fundamentally,...
Attack MEDIUM
Farhad Abtahi, Fernando Seoane, Iván Pau +1 more
Healthcare AI systems face major vulnerabilities to data poisoning that current defenses and regulations cannot adequately address. We analyzed eight...
6 months ago cs.CR cs.AI
Benchmark MEDIUM
Zichao Wei, Jun Zeng, Ming Wen +8 more
Software vulnerabilities are increasing at an alarming rate. However, manual patching is both time-consuming and resource-intensive, while existing...
6 months ago cs.CR cs.SE
Benchmark MEDIUM
Feilong Wang, Fuqiang Liu
The integration of large language models (LLMs) into automated driving systems has opened new possibilities for reasoning and decision-making by...
6 months ago cs.LG cs.AI cs.CR
Benchmark MEDIUM
Guangke Chen, Yuhui Wang, Shouling Ji +2 more
Modern text-to-speech (TTS) systems, particularly those built on Large Audio-Language Models (LALMs), generate high-fidelity speech that faithfully...
6 months ago cs.SD cs.AI cs.CR
Tool MEDIUM
Dennis Wei, Ronny Luss, Xiaomeng Hu +6 more
Large Language Models (LLMs) have become ubiquitous in everyday life and are entering higher-stakes applications ranging from summarizing meeting...
6 months ago cs.CL cs.LG
Benchmark MEDIUM
Fred Heiding, Simon Lermen
We present an end-to-end demonstration of how attackers can exploit AI safety failures to harm vulnerable populations: from jailbreaking LLMs to...
6 months ago cs.CR cs.AI cs.CY
Defense MEDIUM
Jialin Wu, Kecen Li, Zhicong Huang +3 more
Many machine learning models are fine-tuned from large language models (LLMs) to achieve high performance in specialized domains like code...
6 months ago cs.CL cs.CR
Benchmark MEDIUM
Catherine Xia, Manar H. Alalfi
AI programming assistants have demonstrated a tendency to generate code containing basic security vulnerabilities. While developers are ultimately...
6 months ago cs.CR cs.AI
Survey MEDIUM
James Jin Kang, Dang Bui, Thanh Pham +1 more
The growing use of large language models in sensitive domains has exposed a critical weakness: the inability to ensure that private information can...
Survey MEDIUM
Gabrielle M Gauthier, Eesha Ali, Amna Asim +2 more
Human content moderators (CMs) routinely review distressing digital content at scale. Beyond exposure, the work context (e.g., workload, team...
Defense MEDIUM
Daniyal Ganiuly, Nurzhau Bolatbek
The increasing virtualization of fifth generation (5G) networks expands the attack surface of the user plane, making spoofing a persistent threat to...
6 months ago cs.CR cs.NI
Benchmark MEDIUM
Zexu Wang, Jiachi Chen, Zewei Lin +7 more
Smart contracts have significantly advanced blockchain technology, and digital signatures are crucial for reliable verification of contract...
6 months ago cs.CR cs.SE
Benchmark MEDIUM
Yunfei Yang, Xiaojun Chen, Yuexin Xuan +3 more
Model watermarking techniques can embed watermark information into the protected model for ownership declaration by constructing specific...
6 months ago cs.CR cs.LG
Benchmark MEDIUM
Kazuki Iwahana, Yusuke Yamasaki, Akira Ito +2 more
Backdoor attacks pose a critical threat to machine learning models, causing them to behave normally on clean data but misclassify poisoned data into...
6 months ago cs.LG cs.CR
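The clean-versus-poisoned behavior this entry describes can be sketched as classic trigger-based data poisoning (BadNets-style). This is not the paper's attack; the corner patch, patch size, and target label below are illustrative assumptions:

```python
import numpy as np

def poison(images, labels, target_label=0, patch_size=3):
    """Trigger-based poisoning sketch: stamp a small bright patch into the
    bottom-right corner of each image and flip its label to the attacker's
    target class. A model trained on a mix of clean and poisoned samples
    then behaves normally on clean inputs but maps any triggered input
    to target_label -- the backdoor behavior described above."""
    x = images.copy().astype(float)  # leave the caller's arrays untouched
    y = labels.copy()
    x[:, -patch_size:, -patch_size:] = 1.0  # trigger: max-intensity patch
    y[:] = target_label                     # label flip to the target class
    return x, y
```

In a real attack only a small fraction of the training set is poisoned this way, which is what makes the behavior hard to detect on clean validation data.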