Defense MEDIUM
Hong-Hanh Nguyen-Le, Van-Tuan Tran, Dinh-Thuc Nguyen +1 more
The rapid advancement of generators (e.g., StyleGAN, Midjourney, DALL-E) has produced highly realistic synthetic images, posing significant...
4 months ago cs.LG cs.AI cs.CR
Defense MEDIUM
Swastik Bhattacharya, Sanjay Das, Anand Menon +3 more
Deep Neural Networks (DNNs) continue to grow in complexity with Large Language Models (LLMs) incorporating vast numbers of parameters. Handling these...
4 months ago cs.AR cs.LG
Defense MEDIUM
Samih Fadli
Large language model safety is usually assessed with static benchmarks, but key failures are dynamic: value drift under distribution shift, jailbreak...
4 months ago cs.CL cs.AI cs.LG
Defense MEDIUM
Zhaoxin Zhang, Borui Chen, Yiming Hu +3 more
Recent research on large language model (LLM) jailbreaks has primarily focused on techniques that bypass safety mechanisms to elicit overtly harmful...
Defense MEDIUM
Zheyu Lin, Jirui Yang, Yukui Qiu +3 more
Evaluating the safety robustness of LLMs is critical for their deployment. However, mainstream Red Teaming methods rely on online generation and...
4 months ago cs.LG cs.CR
Defense MEDIUM
Quoc Viet Vo, Tashreque M. Haq, Paul Montague +3 more
Certified defenses promise provable robustness guarantees. We study the malicious exploitation of probabilistic certification frameworks to better...
4 months ago cs.LG cs.CR cs.CV
Defense LOW
Mohammad Marufur Rahman, Guanchu Wang, Kaixiong Zhou +2 more
Catastrophic forgetting is a longstanding challenge in continual learning, where models lose knowledge from earlier tasks when learning new ones....
4 months ago cs.LG cs.AI
Defense MEDIUM
JoonHo Lee, HyeonMin Cho, Jaewoong Yun +3 more
We present SGuard-v1, a lightweight safety guardrail for Large Language Models (LLMs), which comprises two specialized models to detect harmful...
4 months ago cs.CL cs.AI cs.CR
Defense HIGH
Jie Chen, Liangmin Wang
Fuzzing is a widely used technique for detecting vulnerabilities in smart contracts, which generates transaction sequences to explore the execution...
4 months ago cs.CR cs.SE
Defense MEDIUM
Thong Bach, Dung Nguyen, Thao Minh Le +1 more
Large language models exhibit systematic vulnerabilities to adversarial attacks despite extensive safety alignment. We provide a mechanistic analysis...
Defense MEDIUM
Ruoxi Cheng, Haoxuan Ma, Teng Ma +1 more
Large Vision-Language Models (LVLMs) exhibit powerful reasoning capabilities but suffer sophisticated jailbreak vulnerabilities. Fundamentally,...
Defense HIGH
Biagio Boi, Christian Esposito
Smart contracts have emerged as key components within decentralized environments, enabling the automation of transactions through self-executing...
Defense MEDIUM
Jialin Wu, Kecen Li, Zhicong Huang +3 more
Many machine learning models are fine-tuned from large language models (LLMs) to achieve high performance in specialized domains like code...
4 months ago cs.CL cs.CR
Defense MEDIUM
Daniyal Ganiuly, Nurzhau Bolatbek
The increasing virtualization of fifth generation (5G) networks expands the attack surface of the user plane, making spoofing a persistent threat to...
4 months ago cs.CR cs.NI
Defense LOW
Huzaifa Arif, Keerthiram Murugesan, Ching-Yun Ko +3 more
We propose patching for large language models (LLMs) like software versions, a lightweight and modular approach for addressing safety...
Defense MEDIUM
Binayak Kara, Ujjwal Sahua, Ciza Thomas +1 more
Securing Dew-Enabled Edge-of-Things (EoT) networks against sophisticated intrusions is a critical challenge. This paper presents HybridGuard, a...
4 months ago cs.CR cs.AI cs.LG
Defense MEDIUM
Tyler Slater
Context: The integration of Large Language Models (LLMs) into core software systems is accelerating. However, existing software architecture patterns...
4 months ago cs.SE cs.AI cs.CR
Defense MEDIUM
Haonan Shi, Guoli Wang, Tu Ouyang +1 more
Small language models (SLMs) are increasingly deployed on edge devices, making their safety alignment crucial yet challenging. Current shallow...
4 months ago cs.CR cs.LG
Defense LOW
Dev Patel, Gabrielle Gervacio, Diekola Raimi +5 more
Large Language Models require substantial computational resources for inference, posing deployment challenges. While dynamic pruning offers superior...
4 months ago cs.LG cs.AI cs.CL
Defense MEDIUM
Oshando Johnson, Alexandra Fomina, Ranjith Krishnamurthy +3 more
The prevalence of security vulnerabilities has prompted companies to adopt static application security testing (SAST) tools for vulnerability...
4 months ago cs.SE cs.AI