Benchmark MEDIUM
Gary Ackerman, Zachary Kallenborn, Anna Wetzel +7 more
The potential for rapidly evolving frontier artificial intelligence (AI) models, especially large language models (LLMs), to facilitate bioterrorism...
5 months ago cs.LG cs.AI cs.CY
Benchmark MEDIUM
Md Nazmul Haque, Elizabeth Lin, Lawrence Arkoh +2 more
Large Language Models for code (LLMs4Code) are increasingly used to generate software artifacts, including library and package recommendations in...
Benchmark MEDIUM
Lukas Johannes Möller
The escalating sophistication and variety of cyber threats have rendered static honeypots inadequate, necessitating adaptive, intelligence-driven...
5 months ago cs.CR cs.DC cs.LG
Benchmark MEDIUM
Jordan Taylor, Sid Black, Dillon Bowen +10 more
Future AI systems could conceal their capabilities ('sandbagging') during evaluations, potentially misleading developers and auditors. We...
Benchmark LOW
Sangha Park, Seungryong Yoo, Jisoo Mok +1 more
Although Multimodal Large Language Models (MLLMs) have advanced substantially, they remain vulnerable to object hallucination caused by language...
5 months ago cs.CV cs.AI
Benchmark LOW
Alisha Ukani, Hamed Haddadi, Ali Shahin Shamsabadi +1 more
This paper presents a systematic evaluation of the privacy behaviors and attributes of eight recent, popular browser agents. Browser agents are...
Benchmark MEDIUM
JV Roig
We investigate how large language models (LLMs) fail when operating as autonomous agents with tool-use capabilities. Using the Kamiwaza Agentic Merit...
5 months ago cs.AI cs.SE
Benchmark MEDIUM
Qiwei Tian, Chenhao Lin, Zhengyu Zhao +1 more
To address the trade-off between robustness and performance in robust VLMs, we observe that function words can introduce vulnerability in VLMs against...
5 months ago cs.LG cs.CL
Benchmark HIGH
Xiaojun Jia, Jie Liao, Qi Guo +11 more
Recent advances in multi-modal large language models (MLLMs) have enabled unified perception-reasoning capabilities, yet these systems remain highly...
5 months ago cs.CR cs.CV
Benchmark MEDIUM
Cheng Cheng, Jinqiu Yang
Code-focused Large Language Models (LLMs), such as Codex and StarCoder, have demonstrated remarkable capabilities in enhancing developer...
Benchmark HIGH
Caleb Gross
Security research is fundamentally a problem of resource constraint and consequent prioritization. There is simply too much attack surface and too...
5 months ago cs.CR cs.IR
Benchmark HIGH
Xiuyuan Chen, Jian Zhao, Yuxiang He +10 more
While the deployment of large language models (LLMs) in high-value industries continues to expand, the systematic assessment of their safety against...
Benchmark MEDIUM
Ashish Hooda, Mihai Christodorescu, Chuangang Ren +3 more
Machine learning (ML) models for code clone detection determine whether two pieces of code are semantically equivalent, which in turn is a key...
5 months ago cs.SE cs.AI
Benchmark LOW
Feijiang Han
Malicious WebShells pose a significant and evolving threat by compromising critical digital infrastructures and endangering public services in...
5 months ago cs.CR cs.AI cs.LG
Benchmark MEDIUM
Chenlin Xu, Lei Zhang, Lituan Wang +5 more
Due to the scarcity of annotated data and the substantial computational costs of models, conventional tuning methods in medical image segmentation...
Benchmark MEDIUM
Yizhou Zhao, Zhiwei Steven Wu, Adam Block
Watermarking aims to embed hidden signals in generated text that can be reliably detected when given access to a secret key. Open-weight language...
5 months ago cs.LG cs.AI cs.CR
Benchmark MEDIUM
Tengyun Ma, Jiaqi Yao, Daojing He +4 more
Large Language Models (LLMs) have emerged as powerful tools for diverse applications. However, their uniform token processing paradigm introduces...
5 months ago cs.CR cs.AI
Benchmark HIGH
Songwen Zhao, Danqing Wang, Kexun Zhang +3 more
Vibe coding is a new programming paradigm in which human engineers instruct large language model (LLM) agents to complete complex coding tasks with...
5 months ago cs.SE cs.CL
Benchmark MEDIUM
Junyu Wang, Changjia Zhu, Yuanbo Zhou +3 more
This paper studies how multimodal large language models (MLLMs) undermine the security guarantees of visual CAPTCHA. We identify the attack surface...
5 months ago cs.CR cs.AI
Benchmark LOW
Han Luo, Guy Laban
Large language models (LLMs) now mediate many web-based mental-health, crisis, and other emotionally sensitive services, yet their psychosocial...
5 months ago cs.AI cs.HC cs.MA