Benchmark MEDIUM
Mohamed Shaaban, Mohamed Elmahallawy
Federated learning (FL) enables collaborative training across organizational silos without sharing raw data, making it attractive for...
3 months ago cs.CR cs.CL
PDF
Attack MEDIUM
Akshat Naik, Jay Culligan, Yarin Gal +4 more
As Large Language Model (LLM) agents become more capable, their coordinated use in the form of multi-agent systems is anticipated to emerge as a...
Benchmark MEDIUM
Anudeep Das, Prach Chantasantitam, Gurjot Singh +3 more
Large language models (LLMs) are increasingly deployed in settings where inducing a bias toward a certain topic can have significant consequences,...
3 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Xu Li, Simon Yu, Minzhou Pan +5 more
LLM-based agents are becoming increasingly capable, yet their safety lags behind. This creates a gap between what agents can do and should do. This...
3 months ago cs.CR cs.AI cs.CL
PDF
Attack MEDIUM
Yiran Gao, Kim Hammar, Tao Li
Rapidly evolving cyberattacks demand incident response systems that can autonomously learn and adapt to changing threats. Prior work has extensively...
3 months ago cs.CR cs.AI
PDF
Defense MEDIUM
George Alexandru Adam, Alexander Cui, Edwin Thomas +7 more
While historical considerations surrounding text authenticity revolved primarily around plagiarism, the advent of large language models (LLMs) has...
Benchmark MEDIUM
Tailia Malloy, Tegawende F. Bissyande
Large Language Models are expanding beyond being tools that humans use into independent agents that can observe an environment, reason about...
3 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Nataša Krčo, Zexi Yao, Matthieu Meeus +1 more
Data containing personal information is increasingly used to train, fine-tune, or query Large Language Models (LLMs). Text is typically scrubbed of...
3 months ago cs.CL cs.AI cs.CR
PDF
Attack MEDIUM
Oguzhan Baser, Elahe Sadeghi, Eric Wang +5 more
Most large language models (LLMs) run on external clouds: users send a prompt, pay for inference, and must trust that the remote GPU executes the LLM...
3 months ago cs.CR cs.AI
PDF
Defense MEDIUM
Zhaoxin Wang, Jiaming Liang, Fengbin Zhu +5 more
Large language models (LLMs) and multimodal LLMs are typically safety-aligned before release to prevent harmful content generation. However, recent...
Defense MEDIUM
Yujun Zhou, Yue Huang, Han Bao +8 more
While most AI alignment research focuses on preventing models from generating explicitly harmful content, a more subtle risk is emerging:...
3 months ago cs.LG cs.CL
PDF
Survey MEDIUM
Varpu Vehomäki, Kimmo K. Kaski
Understanding cyber security is increasingly important for individuals and organizations. However, a lot of information related to cyber security can...
Defense MEDIUM
Christian Rondanini, Barbara Carminati, Elena Ferrari +2 more
The proliferation of edge devices has created an urgent need for security solutions capable of detecting malware in real time while operating under...
3 months ago cs.CR cs.AI cs.DC
PDF
Benchmark MEDIUM
Faouzi El Yagoubi, Ranwa Al Mallah, Godwin Badu-Marfo
Multi-agent Large Language Model (LLM) systems create privacy risks that current benchmarks cannot measure. When agents coordinate on tasks,...
Defense MEDIUM
Md Sazedur Rahman, Mizanur Rahman Jewel, Sanjay Madria
Mining is rapidly evolving into an AI-driven cyber-physical ecosystem where safety and operational reliability depend on robust perception,...
3 months ago cs.CR cs.DC
PDF
Benchmark MEDIUM
Aashish Kolluri, Rishi Sharma, Manuel Costa +5 more
Indirect prompt injection attacks threaten AI agents that execute consequential actions, motivating deterministic system-level defenses. Such...
3 months ago cs.CR cs.LG
PDF
Benchmark MEDIUM
Arpit Singh Gautam, Kailash Talreja, Saurabh Jha
Large Language Models (LLMs) frequently hallucinate plausible but incorrect assertions, a vulnerability often missed by uncertainty metrics when...
3 months ago cs.CL cs.AI
PDF
Attack MEDIUM
Abhishek Saini, Haolin Jiang, Hang Liu
The deployment of large language models (LLMs) on third-party devices requires new ways to protect model intellectual property. While Trusted...
3 months ago cs.CR cs.AR
PDF
Benchmark MEDIUM
Zhenhua Zou, Sheng Guo, Qiuyang Zhan +6 more
The evolution of Large Language Models (LLMs) has shifted mobile computing from App-centric interactions to system-level autonomous agents. Current...
3 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Xinguo Feng, Zhongkui Ma, Zihan Wang +2 more
Training and fine-tuning large-scale language models largely benefit from collaborative learning, but the approach has been proven vulnerable to...