Tool MEDIUM
Neha Nagaraja, Hayretdin Bahsi
Large Language Models (LLMs) are increasingly integrated into safety-critical workflows, yet existing security analyses remain fragmented and often...
2 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Yige Li, Wei Zhao, Zhe Li +6 more
Backdoor mechanisms have traditionally been studied as security threats that compromise the integrity of machine learning models. However, the same...
2 months ago cs.CR cs.AI
PDF
Survey LOW
Saroj Mishra, Suman Niroula, Umesh Yadav +3 more
Retrieval-Augmented Generation (RAG) systems are increasingly evolving into agentic architectures where large language models autonomously coordinate...
2 months ago cs.AI cs.CL cs.CR
PDF
Attack MEDIUM
Eduard Hirsch, Kristina Raab, Tobias J. Bauer +1 more
IT systems are facing an increasing number of security threats, including advanced persistent attacks and future quantum-computing vulnerabilities....
2 months ago cs.CR cs.IR
PDF
Benchmark MEDIUM
Yuxu Ge
Autonomous agents powered by large language models introduce a class of execution-layer vulnerabilities -- prompt injection, retrieval poisoning, and...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Jialai Wang, Ya Wen, Zhongmou Liu +4 more
Targeted bit-flip attacks (BFAs) exploit hardware faults to manipulate model parameters, posing a significant security threat. While prior work...
2 months ago cs.CR cs.AI
PDF
Tool MEDIUM
Punyajoy Saha, Sudipta Halder, Debjyoti Mondal +1 more
Safety alignment is critical for deploying large language models (LLMs) in real-world applications, yet most existing approaches rely on large...
2 months ago cs.CL cs.AI cs.LG
PDF
Attack HIGH
Ondřej Lukáš, Jihoon Shin, Emilia Rivas +6 more
Autonomous offensive agents often fail to transfer beyond the networks on which they are trained. We isolate a minimal but fundamental shift --...
2 months ago cs.CR cs.LG
PDF
Benchmark HIGH
Zheng Yu, Wenxuan Shi, Xinqian Sun +3 more
Automated Vulnerability Repair (AVR) systems, especially those leveraging large language models (LLMs), have demonstrated promising results in...
Survey MEDIUM
Elzo Brito dos Santos Filho
AI-assisted software generation has increased development speed, but it has also amplified a persistent engineering problem: systems that are...
2 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Donghwa Kang, Hojun Choe, Doohyun Kim +2 more
Deploying deep neural networks (DNNs) on edge devices exposes valuable intellectual property to model-stealing attacks. While TEE-shielded DNN...
Benchmark LOW
Yanbang Sun, Quan Luo, Yuelin Wang +6 more
Network protocols are the foundation of modern communication, yet their implementations often contain semantic vulnerabilities stemming from...
2 months ago cs.CR cs.CY
PDF
Defense MEDIUM
Xisen Jin, Michael Duan, Qin Lin +4 more
As AI agents become widely deployed as online services, users often rely on an agent developer's claim about how safety is enforced, which introduces...
2 months ago cs.CR cs.AI cs.CL
PDF
Defense MEDIUM
Jinman Wu, Yi Xie, Shen Lin +2 more
Safety alignment is often conceptualized as a monolithic process wherein harmfulness detection automatically triggers refusal. However, the...
2 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Jinman Wu, Yi Xie, Shiqian Zhao +1 more
Currently, open-sourced large language models (OSLLMs) have demonstrated remarkable generative performance. However, as their structure and weights...
2 months ago cs.CR cs.AI
PDF
Defense MEDIUM
Ved Sriraman, Adam Block
Best-of-N (BoN) sampling is a widely used inference-time alignment method for language models, whereby N candidate responses are sampled from a...
2 months ago cs.LG cs.AI
PDF
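The Best-of-N entry above describes a simple inference-time alignment loop: sample N candidate responses from the base policy, score each with a reward model, and return the highest-scoring one. A minimal sketch, assuming hypothetical `generate` and `reward` stand-ins for the policy and reward model (neither is from the paper):

```python
import random

def best_of_n(generate, reward, prompt, n=8):
    """Best-of-N sampling: draw n candidates from the base policy
    and return the one the reward model scores highest."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=reward)

# Toy stand-ins, illustrative only: a "policy" that appends a random
# word and a "reward model" that prefers longer final words.
random.seed(0)

def generate(prompt):
    return prompt + " " + random.choice(["ok", "good", "great", "best"])

def reward(response):
    return len(response.split()[-1])

print(best_of_n(generate, reward, "Answer:", n=4))
```

The selection step is the whole method: quality scales with N at the cost of N forward passes, with no change to the underlying model weights.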
Benchmark LOW
Amirpasha Mozaffari, Amanda Duarte, Lina Teckentrup +8 more
The rapid adoption of AI in Earth system science promises unprecedented speed and fidelity in the generation of climate information. However, this...
2 months ago physics.ao-ph cs.AI cs.LG
PDF
Tool HIGH
Touseef Hasan, Blessing Airehenbuwa, Nitin Pundir +2 more
Large language models (LLMs) have shown remarkable capabilities in natural language processing tasks, yet their application in hardware security...
2 months ago cs.CR cs.AI
PDF
Defense LOW
Junchuan Zhao, Minh Duc Vu, Ye Wang
Neural codec language models enable high-quality discrete speech synthesis, yet their inference remains vulnerable to token-level artifacts and...
2 months ago cs.SD eess.AS
PDF