Defense MEDIUM
Hailin Liu, Eugene Ilyushin, Jie Ni +1 more
Large language model (LLM) agents are vulnerable to prompt-injection attacks that propagate through multi-step workflows, tool interactions, and...
3 weeks ago cs.AI cs.MA
Attack MEDIUM
Jianming Tong, Hanshen Xiao, Krishna Kumar Nair +5 more
Multi-user virtual reality enables immersive interaction. However, rendering avatars for numerous participants on each headset incurs prohibitive...
3 weeks ago cs.CR cs.AR cs.CV
Benchmark MEDIUM
Dongwook Lee, Eunwoo Song, Che Hyun Lee +2 more
While recent Spoken Language Models (SLMs) have been actively deployed in real-world scenarios, they lack the capability to discern Third-Party...
3 weeks ago cs.CL cs.AI cs.SD
Benchmark MEDIUM
Rina Mishra, Gaurav Varshney, Doddipatla Sesha Sahithi
The rapid adoption of open-source Large Language Models (LLMs) in offline and enterprise environments has introduced a largely unexamined security...
Other MEDIUM
XiangRui Zhang, Qiang Li, Haining Wang
Binary analysis increasingly relies on large language models (LLMs) to perform semantic reasoning over complex program behaviors. However, existing...
Attack MEDIUM
Xuanli He, Bilgehan Sel, Faizan Ali +3 more
Large Language Models (LLMs) are increasingly exposed to adaptive jailbreaking, particularly in high-stakes Chemical, Biological, Radiological, and...
3 weeks ago cs.CL cs.CR
Attack MEDIUM
Firas Ben Hmida, Philemon Hailemariam, Kashif Ali Khan +1 more
Deep neural networks (DNNs) remain largely opaque at inference time, limiting our ability to detect and diagnose malicious input manipulations such...
Attack MEDIUM
Pavel Chizhov, Egor Bogomolov, Ivan P. Yamshchikov
Efficiency and safety of Large Language Models (LLMs), among other factors, rely on the quality of tokenization. A good tokenizer not only improves...
Benchmark MEDIUM
Djiré Albérick Euraste, Kaboré Abdoul Kader, Jordan Samhi +3 more
The lack of transparency about code datasets used to train large language models (LLMs) makes it difficult to detect, evaluate, and mitigate data...
Survey MEDIUM
Yi Ting Shen, Kentaroh Toyoda, Alex Leung
The rapid proliferation of Model Context Protocol (MCP)-based agentic systems has introduced a new category of security threats that existing...
3 weeks ago cs.CR cs.AI
Benchmark MEDIUM
Xixun Lin, Yang Liu, Yancheng Chen +9 more
The performance of large language model (LLM) agents depends critically on the execution harness, the system layer that orchestrates tool use,...
3 weeks ago cs.CR cs.AI
Defense MEDIUM
Xiaohua Wang, Muzhao Tian, Yuqi Zeng +20 more
Reinforcement Learning from Human Feedback (RLHF) and related alignment paradigms have become central to steering large language models (LLMs) and...
Defense MEDIUM
Sujan Ghimire, Parsa Mirfasihi, Muhtasim Alam Chowdhury +6 more
The globalization of integrated circuit (IC) design and manufacturing has increased the exposure of hardware intellectual property (IP) to untrusted...
Benchmark MEDIUM
Prajas Wadekar, Venkata Sai Pranav Bachina, Kunal Bhosikar +2 more
3D Gaussian Splatting (3DGS) has recently enabled highly photorealistic 3D reconstruction from casually captured multi-view images. However, this...
4 weeks ago cs.CV cs.CR cs.LG
Benchmark MEDIUM
Joel Fokou
Autonomous AI agents are rapidly transitioning from experimental tools to operational infrastructure, with projections that 80% of enterprise...
4 weeks ago cs.CR cs.AI
Attack MEDIUM
Shaopeng Fu, Di Wang
Adversarial training (AT) is an effective defense for large language models (LLMs) against jailbreak attacks, but performing AT on LLMs is costly. To...
4 weeks ago cs.LG cs.CR stat.ML
Attack MEDIUM
Anasuya Chattopadhyay, Daniel Reti, Hans D. Schotten
Cloud networks increasingly rely on machine learning-based Network Intrusion Detection Systems to defend against evolving cyber threats. However,...
4 weeks ago cs.LG cs.CR
Attack MEDIUM
Vladimir A. Mazin, Mikhail A. Zorin, Dmitrii S. Korzh +3 more
Passwords remain a dominant authentication method, yet their security is routinely subverted by predictable user choices and large-scale...
4 weeks ago cs.CR cs.AI
Benchmark MEDIUM
Miit Daga, Swarna Priya Ramu
Organisations increasingly outsource privacy-sensitive data transformations to cloud providers, yet no practical mechanism lets the data owner verify...
4 weeks ago cs.CR cs.DB cs.LG
Benchmark MEDIUM
Rui Yin, Tianxu Han, Naen Xu +8 more
Safety-aligned large language models (LLMs) are increasingly deployed in real-world pipelines, yet this deployment also enlarges the supply-chain...
4 weeks ago cs.CR cs.CL