Attack MEDIUM
Abed K. Musaffar, Ambuj Singh, Francesco Bullo
Large language models (LLMs) are increasingly deployed in human-AI teams as support agents for complex tasks such as information retrieval,...
1 month ago cs.LG cs.AI cs.HC
PDF
Defense MEDIUM
Xinyue Liu, Niloofar Mireshghallah, Jane C. Ginsburg +1 more
Frontier LLM companies have repeatedly assured courts and regulators that their models do not store copies of training data. They further rely on...
1 month ago cs.CL cs.AI cs.CY
PDF
Tool MEDIUM
Uchi Uchibeke
AI agents today have passwords but no permission slips. They execute tool calls (fund transfers, database queries, shell commands, sub-agent...
1 month ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Jiahao Chen, Zhiming Zhao, Yuwen Pu +4 more
Federated learning (FL) has attracted substantial attention in both academia and industry, yet its practical security posture remains poorly...
Benchmark MEDIUM
Hung Yun Tseng, Wuzhen Li, Blerina Gkotse +1 more
The potential of Large Language Models (LLMs) to provide harmful information remains a significant concern due to the vast breadth of illegal queries...
Benchmark MEDIUM
Christopher J. Agostino, Quan Le Thien, Nayan D'Souza +1 more
Understanding the fundamental mechanisms governing the production of meaning in the processing of natural language is critical for designing safe,...
1 month ago cs.CL cs.AI cs.HC
PDF
Attack MEDIUM
Vicenç Torra, Maria Bras-Amorós
Memory poisoning attacks for Agentic AI and multi-agent systems (MAS) have recently caught attention. It is partially due to the fact that Large...
1 month ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Fazhong Liu, Zhuoyan Chen, Tu Lan +6 more
Autonomous coding agents are increasingly integrated into software development workflows, offering capabilities that extend beyond code suggestion to...
1 month ago cs.CR cs.AI
PDF
Attack MEDIUM
Qi Luo, Minghui Xu, Dongxiao Yu +1 more
Many learning systems now use graph data in which each node also contains text, such as papers with abstracts or users with posts. Because these...
1 month ago cs.LG cs.CR
PDF
Attack MEDIUM
Dong-Xiao Zhang, Hu Lou, Jun-Jie Zhang +2 more
Adversarial vulnerability in vision and hallucination in large language models are conventionally viewed as separate problems, each addressed with...
1 month ago cs.LG cs.IT physics.comp-ph
PDF
Tool MEDIUM
Vincent Siu, Jingxuan He, Kyle Montgomery +4 more
Security in LLM agents is inherently contextual. For example, the same action taken by an agent may represent legitimate behavior or a security...
1 month ago cs.CR cs.AI
PDF
Defense MEDIUM
Shawn Li, Yue Zhao
Large language model (LLM) agents increasingly rely on external tools (file operations, API calls, database transactions) to autonomously complete...
1 month ago cs.CR cs.AI cs.LG
PDF
Defense MEDIUM
Carlos Hinojosa, Clemens Grange, Bernard Ghanem
Vision-language models (VLMs) are increasingly deployed in real-world and embodied settings where safety decisions depend on visual context. However,...
1 month ago cs.CV cs.AI cs.CL
PDF
Benchmark MEDIUM
Zikang Ding, Junhao Li, Suling Wu +3 more
Model watermarking utilizes internal representations to protect the ownership of large language models (LLMs). However, these features inevitably...
1 month ago cs.CR cs.AI
PDF
Survey MEDIUM
Saket Sanjeev Chaturvedi, Joshua Bergerson, Tanwi Mallick
As large language models (LLMs) evolve into autonomous "AI scientists," they promise transformative advances but introduce novel vulnerabilities,...
1 month ago cs.CR cs.CV
PDF
Attack MEDIUM
Xavier Cadet, Aditya Vikram Singh, Harsh Mamania +6 more
Investigating cybersecurity incidents requires collecting and analyzing evidence from multiple log sources, including intrusion detection alerts,...
1 month ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Haocheng Li, Juepeng Zheng, Shuangxi Miao +4 more
Multimodal remote sensing semantic segmentation enhances scene interpretation by exploiting complementary physical cues from heterogeneous data....
Benchmark MEDIUM
Wanjun Du, Zifeng Yuan, Tingting Chen +3 more
Existing vision-language models (VLMs) have demonstrated impressive performance in reasoning-based segmentation. However, current benchmarks are...
1 month ago cs.CV cs.AI
PDF
Benchmark MEDIUM
Yuntong Zhang, Sungmin Kang, Ruijie Meng +2 more
Agentic AI has been a topic of great interest recently. A Large Language Model (LLM) agent involves one or more LLMs in the back-end. In the front...