Defense MEDIUM
Zeyu Zhang, Xiangxiang Dai, Ziyi Han +2 more
Large language models (LLMs) are typically governed by post-training alignment (e.g., RLHF or DPO), which yields a largely static policy during...
2 months ago cs.LG cs.AI
PDF
Attack HIGH
Yangyang Wei, Yijie Xu, Zhenyuan Li +2 more
Multi-agent systems are emerging as the de facto standard for complex task orchestration. However, their reliance on autonomous execution and...
2 months ago cs.CR cs.MA
PDF
Attack HIGH
Neha Nagaraja, Lan Zhang, Zhilong Wang +2 more
Multimodal Large Language Models (MLLMs) integrate vision and text to power applications, but this integration introduces new vulnerabilities. We...
2 months ago cs.CV cs.AI cs.CR
PDF
Tool MEDIUM
Neha Nagaraja, Hayretdin Bahsi
While incorporating LLMs into systems offers significant benefits in critical application areas such as healthcare, new security challenges emerge...
2 months ago cs.CR cs.AI
PDF
Defense LOW
Brandon Yee, Krishna Sharma
MoltBook is a large-scale multi-agent coordination environment where over 770,000 autonomous LLM agents interact without human participation,...
2 months ago cs.MA cs.AI cs.SI
PDF
Tool LOW
Subramanyam Sahoo
Agentic AI systems - capable of goal interpretation, world modeling, planning, tool use, long-horizon operation, and autonomous coordination -...
2 months ago cs.CY cs.AI
PDF
Other MEDIUM
Difan Jiao, Di Wang, Lijie Hu
In-context learning enables large language models to perform novel tasks through few-shot demonstrations. However, demonstrations per se can...
2 months ago cs.LG cs.AI
PDF
Attack MEDIUM
Achyutha Menon, Magnus Saebo, Tyler Crosse +3 more
The accelerating adoption of language models (LMs) as agents for deployment in long-context tasks motivates a thorough understanding of goal drift:...
Benchmark MEDIUM
Aradhye Agarwal, Gurdit Siyan, Yash Pandya +3 more
Agentic language models operate in a fundamentally different safety regime than chat models: they must plan, call tools, and execute long-horizon...
Tool MEDIUM
Romina Omidi, Yun Dong, Binghui Wang
Google's SynthID-Text, the first production-ready generative watermarking system for large language models, introduces a novel Tournament-based method...
2 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Yuhang Li, Yajie Wang, Xiangyun Tang +3 more
Secure aggregation is a foundational building block of privacy-preserving learning, yet achieving robustness under adversarial behavior remains...
Attack HIGH
Zhi Xu, Jiaqi Li, Xiaotong Zhang +2 more
Large language models (LLMs) have achieved remarkable success across diverse applications but remain vulnerable to jailbreak attacks, where attackers...
Benchmark MEDIUM
Pearl Mody, Mihir Panchal, Rishit Kar +2 more
Large language model (LLM) agents are increasingly deployed in long-running workflows, where they must preserve user and task state across many...
Attack MEDIUM
Edouard Lansiaux
Federated Learning (FL) enables collaborative training of medical AI models across hospitals without centralizing patient data. However, the exchange...
2 months ago cs.CR cs.AI
PDF
Attack HIGH
Peter Horvath, Ilia Shumailov, Lukasz Chmielewski +2 more
The multi-million dollar investment required for modern machine learning (ML) has made large ML models a prime target for theft. In response, the...
Attack HIGH
Jiayao Wang, Mohammad Maruf Hasan, Yiping Zhang +5 more
Self-Supervised Learning (SSL) has emerged as a significant paradigm in representation learning thanks to its ability to learn without extensive...
Benchmark MEDIUM
Junjie Chu, Xinyue Shen, Ye Leng +3 more
The rapid growth of research in LLM safety makes it hard to track all advances. Benchmarks are therefore crucial for capturing key trends and...
2 months ago cs.CR cs.AI cs.SE
PDF
Benchmark LOW
Hongduan Tian, Xiao Feng, Ziyuan Zhao +3 more
Large language models (LLMs) have recently demonstrated impressive capabilities in reasoning tasks. Currently, mainstream LLM reasoning frameworks...
2 months ago cs.CL cs.LG
PDF
Attack MEDIUM
Shuyi Zhou, Zeen Song, Wenwen Qiang +4 more
Large Language Models remain vulnerable to adversarial prefix attacks (e.g., "Sure, here is") despite robust standard safety. We diagnose this...
Tool MEDIUM
Zixuan Xu, Tiancheng He, Huahui Yi +7 more
Vision-language models remain susceptible to multimodal jailbreaks and over-refusal because safety hinges on both visual evidence and user intent,...