Benchmark MEDIUM
He Yang Yuan, Xin Wang, Kundi Yao +3 more
Logging code plays an important role in software systems by recording key events and behaviors, which are essential for debugging and monitoring....
3 weeks ago cs.SE cs.AI cs.CR
Benchmark MEDIUM
Girish, Mohd Mujtaba Akhtar, Orchid Chetia Phukan +1 more
The rapid advancement of Audio Large Language Models (ALMs), driven by Neural Audio Codecs (NACs), has led to the emergence of highly realistic...
Benchmark MEDIUM
Robert Stanley, Avi Verma, Lillian Tsai +2 more
AI agents promise to serve as general-purpose personal assistants for their users, which requires them to have access to private user data (e.g.,...
3 weeks ago cs.CR cs.AI cs.OS
Benchmark MEDIUM
Alankrit Chona, Igor Kozlov, Ambuj Kumar
We introduce the Cyber Defense Benchmark, a benchmark for measuring how well large language model (LLM) agents perform the core SOC analyst task of...
3 weeks ago cs.CR cs.AI
Defense MEDIUM
Divyesh Gabbireddy, Suman Saha
Cross-site scripting (XSS) remains a persistent web security vulnerability, especially because obfuscation can change the surface form of a malicious...
3 weeks ago cs.CR cs.LG cs.SE
Defense MEDIUM
Sarang Nambiar, Dhruv Pradhan, Ezekiel Soremekun
Pre-trained machine learning models (PTMs) are commonly provided via Model Hubs (e.g., Hugging Face) in standard formats like Pickles to facilitate...
3 weeks ago cs.CR cs.SE
Benchmark MEDIUM
Ali Al-Kaswan, Maksim Plotnikov, Maxim Hájek +3 more
Large Language Model (LLM) agents are increasingly proposed for autonomous cybersecurity tasks, but their capabilities in realistic offensive...
3 weeks ago cs.AI cs.CR cs.SE
Defense MEDIUM
Kun Wang, Cheng Qian, Miao Yu +6 more
Multimodal Large Language Models (MLLMs) have achieved remarkable success in cross-modal understanding and generation, yet their deployment is...
3 weeks ago cs.CR cs.AI
Defense MEDIUM
Hugo Lyons Keenan, Christopher Leckie, Sarah Erfani
We can often verify the correctness of neural network outputs using ground truth labels, but we cannot reliably determine whether the output was...
3 weeks ago cs.LG cs.CR
Benchmark MEDIUM
Ahson Saiyed, Sabrina Sadiekh, Chirag Agarwal
Large Language Models (LLMs) remain vulnerable to optimization-based jailbreak attacks that exploit internal gradient structure. While Sparse...
3 weeks ago cs.LG cs.AI cs.CL
Attack MEDIUM
Ruixuan Liu, David Evans, Li Xiong
Indistinguishability properties such as differential privacy bounds or low empirically measured membership inference are widely treated as proxies to...
3 weeks ago cs.CR cs.CL cs.LG
Benchmark MEDIUM
Sina Abdollahi, Mohammad M Maheri, Javad Forough +5 more
Large Language Model (LLM) agents provide powerful automation capabilities, but they also create a substantially broader attack surface than...
3 weeks ago cs.CR cs.OS
Defense MEDIUM
Ziyang Liu
Hosted-LLM providers have a silent-substitution incentive: advertise a stronger model while serving cheaper replies. Probe-after-return schemes such...
3 weeks ago cs.CR cs.AI
Defense MEDIUM
Dongcheng Zhang, Yiqing Jiang
Existing AI agent safety benchmarks focus on generic criminal harm (cybercrime, harassment, weapon synthesis), leaving a systematic blind spot for a...
3 weeks ago cs.CR cs.AI cs.CL
Benchmark MEDIUM
Ziyao Tang, Pengkun Jiao, Bin Zhu +3 more
Video Large Language Models (Vid-LLMs) have demonstrated remarkable performance in video understanding tasks, yet their robustness under...
Defense MEDIUM
Ting Zhang, Yikun Li, Chengran Yang +15 more
Software vulnerabilities remain one of the most persistent threats to modern digital infrastructure. While static application security testing (SAST)...
Benchmark MEDIUM
Shozo Saeki, Minoru Kawahara, Hirohisa Aman
A nearest-neighbor framework is a fundamental tool for various applications involving Large Language Models (LLMs) and Visual Language Models (VLMs)....
Tool MEDIUM
Yuan Fang, Yiming Luo, Aimin Zhou +1 more
Ensuring the safety of large language models (LLMs) requires robust red teaming, yet the systematic synthesis of high-quality toxic data remains...
3 weeks ago cs.CL cs.AI
Benchmark MEDIUM
Yihao Zou, Tianming Zheng, Futai Zou +1 more
Fuzzing has become a widely adopted technique for vulnerability discovery, yet it remains ineffective for structured-input programs due to strict...
3 weeks ago cs.CR cs.PL