Attack HIGH
Mukkesh Ganesh, Kaushik Iyer, Arun Baalaaji Sankar Ananthan
The Key Value(KV) cache is an important component for efficient inference in autoregressive Large Language Models (LLMs), but its role as a...
4 months ago cs.CR cs.AI
PDF
Attack HIGH
Yunhao Chen, Xin Wang, Juncheng Li +5 more
Automated red teaming frameworks for Large Language Models (LLMs) have become increasingly sophisticated, yet they share a fundamental limitation:...
4 months ago cs.CL cs.CR
PDF
Tool LOW
Samuel Nathanson, Alexander Lee, Catherine Chen Kieffer +7 more
Assurance for artificial intelligence (AI) systems remains fragmented across software supply-chain security, adversarial machine learning, and...
4 months ago cs.CR cs.AI cs.LG
PDF
Tool MEDIUM
Rathin Chandra Shit, Sharmila Subudhi
The security of autonomous vehicle networks is facing major challenges, owing to the complexity of sensor integration, real-time performance demands,...
4 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Haotian Jin, Yang Li, Haihui Fan +3 more
Backdoor attacks pose a serious threat to the security of large language models (LLMs), causing them to exhibit anomalous behavior under specific...
4 months ago cs.CR cs.AI
PDF
Attack HIGH
Samuel Nathanson, Rebecca Williams, Cynthia Matuszek
Large language models (LLMs) increasingly operate in multi-agent and safety-critical settings, raising open questions about how their vulnerabilities...
4 months ago cs.LG cs.AI cs.CL
PDF
Defense MEDIUM
JoonHo Lee, HyeonMin Cho, Jaewoong Yun +3 more
We present SGuard-v1, a lightweight safety guardrail for Large Language Models (LLMs), which comprises two specialized models to detect harmful...
4 months ago cs.CL cs.AI cs.CR
PDF
Attack MEDIUM
Onkar Shelar, Travis Desell
Large Language Models remain vulnerable to adversarial prompts that elicit toxic content even after safety alignment. We present ToxSearch, a...
4 months ago cs.NE cs.AI cs.CL
PDF
Attack HIGH
Jiaji Ma, Puja Trivedi, Danai Koutra
Text-attributed graphs (TAGs), which combine structural and textual node information, are ubiquitous across many domains. Recent work integrates...
4 months ago cs.CR cs.LG
PDF
Attack MEDIUM
Yuting Tan, Yi Huang, Zhuo Li
Backdoor attacks on large language models (LLMs) typically couple a secret trigger to an explicit malicious output. We show that this explicit...
4 months ago cs.LG cs.CR
PDF
Benchmark LOW
Yikun Li, Matteo Grella, Daniel Nahmias +5 more
In recent years, Infrastructure as Code (IaC) has emerged as a critical approach for managing and provisioning IT infrastructure through code and...
4 months ago cs.CR cs.SE
PDF
Attack HIGH
Hasini Jayathilaka
Prompt injection attacks are an emerging threat to large language models (LLMs), enabling malicious users to manipulate outputs through carefully...
Attack HIGH
Rui Wang, Zeming Wei, Xiyue Zhang +1 more
Deep Neural Networks (DNNs) are known to be vulnerable to various adversarial perturbations. To address the safety concerns arising from these...
4 months ago cs.LG cs.AI cs.CR
PDF
Attack HIGH
Gil Goren, Shahar Katz, Lior Wolf
Large Language Models (LLMs) are vulnerable to adversarial attacks that bypass safety guidelines and generate harmful content. Mitigating these...
Defense HIGH
Jie Chen, Liangmin Wang
Fuzzing is a widely used technique for detecting vulnerabilities in smart contracts, which generates transaction sequences to explore the execution...
4 months ago cs.CR cs.SE
PDF
Defense MEDIUM
Thong Bach, Dung Nguyen, Thao Minh Le +1 more
Large language models exhibit systematic vulnerabilities to adversarial attacks despite extensive safety alignment. We provide a mechanistic analysis...
Benchmark HIGH
Jiayu Li, Yunhan Zhao, Xiang Zheng +4 more
Vision-Language-Action (VLA) models enable robots to interpret natural-language instructions and perform diverse tasks, yet their integration of...
4 months ago cs.CR cs.AI cs.CV
PDF
Attack MEDIUM
Sajad U P
Phishing and related cyber threats are becoming more varied and technologically advanced. Among these, email-based phishing remains the most dominant...
4 months ago cs.CR cs.AI cs.LG
PDF
Attack MEDIUM
Shaowei Guan, Yu Zhai, Zhengyu Zhang +2 more
Large Language Models (LLMs) are increasingly vulnerable to adversarial attacks that can subtly manipulate their outputs. While various defense...
4 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Shanmin Wang, Dongdong Zhao
Knowledge Distillation (KD) is essential for compressing large models, yet relying on pre-trained "teacher" models downloaded from third-party...
4 months ago cs.CR cs.AI cs.CV
PDF
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial