Attack HIGH
Matta Varun, Ajay Kumar Dhakar, Yuan Hong +1 more
Graph neural networks (GNNs) are powerful tools for analyzing graph-structured data. However, their vulnerability to adversarial attacks raises serious...
4 days ago cs.LG cs.CR
PDF
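As a hedged illustration of the attack class this entry covers (not the paper's own method), many structural attacks on GNNs reduce to toggling edges in the graph's adjacency matrix so that message passing aggregates misleading neighbors. The function and graph below are purely illustrative:

```python
# Illustrative sketch: a single edge-flip perturbation, the basic
# move behind many GNN poisoning/evasion attacks. Not taken from
# the paper above; names and the toy graph are assumptions.

def flip_edge(adj, u, v):
    """Return a copy of the (undirected) adjacency matrix with edge (u, v) toggled."""
    new_adj = [row[:] for row in adj]
    new_adj[u][v] ^= 1  # XOR toggles 0 -> 1 (add edge) or 1 -> 0 (remove edge)
    new_adj[v][u] ^= 1  # keep the matrix symmetric
    return new_adj

# Toy 3-node path graph: 0 - 1 - 2
adj = [
    [0, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
]
perturbed = flip_edge(adj, 0, 2)  # adds the edge 0 - 2, closing the triangle
```

Real attacks search over many such flips under a budget, picking those that most degrade the target model's predictions; the original matrix is left untouched so candidate perturbations can be compared.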
Benchmark HIGH
Sen Fang, Weiyuan Ding, Zhezhen Cao +2 more
Large Language Models (LLMs) are increasingly adopted for vulnerability detection, yet their reasoning remains fundamentally unsound. We identify a...
4 days ago cs.SE cs.AI cs.CR
PDF
Attack HIGH
Yusheng Zheng, Yiwei Yang, Wei Zhang +1 more
LLM agent frameworks increasingly offer checkpoint-restore for error recovery and exploration, advising developers to make external tool calls safe...
Benchmark MEDIUM
Jiahao Chen, Zhiming Zhao, Yuwen Pu +4 more
Federated learning (FL) has attracted substantial attention in both academia and industry, yet its practical security posture remains poorly...
Benchmark MEDIUM
Hung Yun Tseng, Wuzhen Li, Blerina Gkotse +1 more
The potential of Large Language Models (LLMs) to provide harmful information remains a significant concern due to the vast breadth of illegal queries...
Benchmark MEDIUM
Christopher J. Agostino, Quan Le Thien, Nayan D'Souza +1 more
Understanding the fundamental mechanisms governing the production of meaning in the processing of natural language is critical for designing safe,...
4 days ago cs.CL cs.AI cs.HC
PDF
Attack HIGH
Wenjing Hong, Zhonghua Rong, Li Wang +5 more
Large Language Models (LLMs) have been widely deployed, especially through free Web-based applications that expose them to diverse user-generated...
4 days ago cs.CR cs.AI
PDF
Attack MEDIUM
Vicenç Torra, Maria Bras-Amorós
Memory poisoning attacks on Agentic AI and multi-agent systems (MAS) have recently drawn attention, partly because Large...
5 days ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Fazhong Liu, Zhuoyan Chen, Tu Lan +6 more
Autonomous coding agents are increasingly integrated into software development workflows, offering capabilities that extend beyond code suggestion to...
5 days ago cs.CR cs.AI
PDF
Benchmark LOW
Dong Yan, Jian Liang, Yanbo Wang +3 more
Test-Time Reinforcement Learning (TTRL) enables Large Language Models (LLMs) to enhance reasoning capabilities on unlabeled test streams by deriving...
5 days ago cs.LG cs.AI
PDF
Defense LOW
Anders Giovanni Møller, Elisa Bassignana, Francesco Pierri +1 more
The ubiquity of multimedia content is reshaping online information spaces, particularly in social media environments. At the same time, search is...
5 days ago cs.CY cs.CL cs.HC
PDF
Attack MEDIUM
Qi Luo, Minghui Xu, Dongxiao Yu +1 more
Many learning systems now use graph data in which each node also contains text, such as papers with abstracts or users with posts. Because these...
5 days ago cs.LG cs.CR
PDF
Attack MEDIUM
Dong-Xiao Zhang, Hu Lou, Jun-Jie Zhang +2 more
Adversarial vulnerability in vision and hallucination in large language models are conventionally viewed as separate problems, each addressed with...
5 days ago cs.LG cs.IT physics.comp-ph
PDF
Tool MEDIUM
Vincent Siu, Jingxuan He, Kyle Montgomery +4 more
Security in LLM agents is inherently contextual. For example, the same action taken by an agent may represent legitimate behavior or a security...
5 days ago cs.CR cs.AI
PDF
Defense MEDIUM
Shawn Li, Yue Zhao
Large language model (LLM) agents increasingly rely on external tools (file operations, API calls, database transactions) to autonomously complete...
5 days ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Toan Tran, Olivera Kotevska, Li Xiong
Membership inference attacks (MIAs), which enable adversaries to determine whether specific data points were part of a model's training dataset, have...
5 days ago cs.CR cs.LG
PDF
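Since this entry defines membership inference attacks (MIAs) as determining whether specific data points were in a model's training set, a minimal hedged sketch may help: the classic loss-threshold MIA exploits the fact that models tend to fit training points more tightly, so unusually low loss is weak evidence of membership. The function, threshold, and toy losses below are assumptions for illustration, not the paper's method:

```python
# Illustrative sketch of a loss-threshold membership inference attack.
# All values here are toy data, not results from the paper above.

def mia_threshold(losses, threshold):
    """Predict 'member' (True) for each sample whose loss falls below threshold.

    Intuition: training points are usually fit more closely by the model,
    so low per-sample loss weakly signals training-set membership."""
    return [loss < threshold for loss in losses]

# Toy losses: members typically sit lower than non-members.
member_losses = [0.05, 0.10, 0.20]
nonmember_losses = [0.90, 1.40, 0.75]
preds = mia_threshold(member_losses + nonmember_losses, threshold=0.5)
```

In practice the threshold is calibrated on shadow models or held-out data, and stronger MIAs replace the raw loss with calibrated or likelihood-ratio scores.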
Defense LOW
Rohan Siva, Kai Cheung, Lichi Li +1 more
Modern machine learning systems rely on complex data engineering workflows to extract, transform, and load (ETL) data into production pipelines....
5 days ago cs.SE cs.AI cs.CL
PDF
Benchmark LOW
Zou Qiang
Large language models (LLMs) demonstrate strong generative capabilities but remain vulnerable to hallucination and unreliable reasoning under...
5 days ago cs.AI cs.CL
PDF
Attack HIGH
Aravind Krishnan, Karolina Stańczak, Dietrich Klakow
As Spoken Language Models (SLMs) integrate speech and text modalities, they inherit the safety vulnerabilities of their LLM backbone and an expanded...
Attack HIGH
Sheng Liu, Panos Papadimitratos
Federated learning (FL) has emerged as a transformative paradigm for intelligent transportation systems (ITS), notably camera-based Road Condition Classification (RCC). However, by enabling collaboration,...
5 days ago cs.CR cs.AI cs.DC
PDF