Benchmark MEDIUM
Muntasir Adnan, Carlos C. N. Kuhn
Large Language Models have become integral to software development, yet they frequently generate vulnerable code. Existing code vulnerability...
4 months ago cs.SE cs.AI
Attack MEDIUM
Davis Brown, Juan-Pablo Rivera, Dan Hendrycks +1 more
As frontier AIs become more powerful and costly to develop, adversaries have increasing incentives to steal model weights by mounting exfiltration...
4 months ago cs.CR cs.AI cs.LG
Benchmark MEDIUM
Zhuoran Tan, Run Hao, Jeremy Singer +2 more
Tool-augmented LLM agents raise new security risks: tool executions can introduce runtime-only behaviors, including prompt injection and unintended...
4 months ago cs.CR cs.SE
Attack MEDIUM
Jiajie Zhu, Xia Du, Xiaoyuan Liu +4 more
The rapid advancements in artificial intelligence have significantly accelerated the adoption of speech recognition technology, leading to its...
4 months ago cs.SD cs.CR cs.MM
Defense LOW
Rajiv Thummala, Katherine Winton, Luke Flores +2 more
Out-of-band screening of microcontrollers is a major gap in semiconductor supply chain security. High-assurance techniques such as X-ray and...
Benchmark MEDIUM
Milad Rahmati, Nima Rahmati
The proliferation of Internet of Things devices in critical infrastructure has created unprecedented cybersecurity challenges, necessitating...
4 months ago cs.CR cs.LG
Attack HIGH
M P V S Gopinadh, S Mahaboob Hussain
Large Language Models (LLMs) are integral to modern AI applications, but their safety alignment mechanisms can be bypassed through adversarial prompt...
4 months ago cs.CR cs.AI
Attack LOW
Zhenhong Zhou, Shilinlu Yan, Chuanpu Liu +3 more
Large language models (LLMs) are increasingly deployed in cost-sensitive and on-device scenarios, and safety guardrails have advanced mainly in...
Tool HIGH
Yueyan Dong, Minghui Xu, Qin Hu +5 more
Low-Rank Adaptation (LoRA) has become a popular solution for fine-tuning large language models (LLMs) in federated settings, dramatically reducing...
Tool LOW
Vidyut Sriram, Sawan Pandita, Achintya Lakshmanan +2 more
Large Language Models (LLMs) can generate code but often introduce security vulnerabilities, logical inconsistencies, and compilation errors. Prior...
4 months ago cs.CR cs.LG
Defense MEDIUM
Hyunjun Kim
Guardrail models are essential for ensuring the safety of Large Language Model (LLM) deployments, but processing full multi-turn conversation...
4 months ago cs.CL cs.AI
Benchmark MEDIUM
Muhammad Bilal, Omer Tariq, Hasan Ahmed
Timing and burst patterns can leak through encryption, and an adaptive adversary can exploit them, undermining metadata-only detection in a...
4 months ago cs.CR cs.LG cs.NI
Attack HIGH
Md Mahbub Hasan, Marcus Sternhagen, Krishna Chandra Roy
Additive manufacturing (AM) is rapidly integrating into critical sectors such as aerospace, automotive, and healthcare. However, this cyber-physical...
4 months ago cs.CR cs.AI cs.LG
Attack MEDIUM
Nandish Chattopadhyay, Abdul Basit, Amira Guesmi +3 more
Adversarial attacks pose a significant challenge to the reliable deployment of machine learning models in EdgeAI applications, such as autonomous...
4 months ago cs.CR cs.AI
Benchmark LOW
Sixue Xing, Xuanye Xia, Kerui Wu +3 more
Clinical trial failure remains a central bottleneck in drug development, where minor protocol design flaws can irreversibly compromise outcomes...
4 months ago cs.AI cs.MA
Defense MEDIUM
Weijie Wang, Peizhuo Lv, Yan Wang +7 more
Graph Retrieval-Augmented Generation (GraphRAG) has emerged as a key technique for enhancing Large Language Models (LLMs) with proprietary Knowledge...
Attack MEDIUM
Fumiya Morimoto, Ryuto Morita, Satoshi Ono
Deep neural network-based classifiers are prone to errors when processing adversarial examples (AEs). AEs are minimally perturbed input data...
4 months ago cs.CR cs.LG cs.NE
Benchmark HIGH
Md Hasan Saju, Maher Muhtadi, Akramul Azim
The rapid advancement of Large Language Models (LLMs) presents new opportunities for automated software vulnerability detection, a crucial task in...
4 months ago cs.SE cs.AI
Attack HIGH
Haoran Gu, Handing Wang, Yi Mei +2 more
The widespread deployment of large language models (LLMs) has raised growing concerns about their misuse risks and associated safety issues. While...
4 months ago cs.CR cs.CL
Attack MEDIUM
Xiaoze Liu, Weichen Yu, Matt Fredrikson +2 more
The open-weight language model ecosystem is increasingly defined by model composition techniques (such as weight merging, speculative decoding, and...
4 months ago cs.LG cs.CL cs.CR