Attack MEDIUM
Justin Albrethsen, Yash Datta, Kunal Kumar +1 more
While Large Language Model (LLM) capabilities have scaled, safety guardrails remain largely stateless, treating multi-turn dialogues as a series of...
2 months ago cs.AI cs.ET cs.LG
PDF
Defense MEDIUM
Sasha Behrouzi, Lichao Wu, Mohamadreza Rostami +1 more
Safety alignment is essential for the responsible deployment of large language models (LLMs). Yet, existing approaches often rely on heavyweight...
2 months ago cs.CR cs.LG
PDF
Benchmark MEDIUM
Simon Lermen, Daniel Paleka, Joshua Swanson +3 more
We show that large language models can be used to perform at-scale deanonymization. With full Internet access, our agent can re-identify Hacker News...
2 months ago cs.CR cs.AI cs.LG
PDF
Attack MEDIUM
Nils Palumbo, Sarthak Choudhary, Jihye Choi +2 more
LLM-based agents are increasingly being deployed in contexts requiring complex authorization policies: customer service protocols, approval...
2 months ago cs.CR cs.AI cs.MA
PDF
Benchmark MEDIUM
Michael Cunningham
We present a practical system for privacy-aware large language model (LLM) inference that splits a transformer between a trusted local GPU and an...
2 months ago cs.CR cs.DC
PDF
Benchmark MEDIUM
Nivya Talokar, Ayush K Tarun, Murari Mandal +2 more
LLM-based agents execute real-world workflows via tools and memory. These affordances also enable malicious adversaries to use these agents to...
2 months ago cs.CL cs.LG
PDF
Benchmark MEDIUM
Johannes Bertram, Jonas Geiping
We introduce NESSiE, the NEceSsary SafEty benchmark for large language models (LLMs). With minimal test cases of information and access security,...
2 months ago cs.CR cs.SE
PDF
Defense MEDIUM
Ahmed Ryan, Ibrahim Khalil, Abdullah Al Jahid +4 more
The prevalence of malicious packages in open-source repositories, such as PyPI, poses a critical threat to the software supply chain. While Large...
2 months ago cs.CR cs.SE
PDF
Attack MEDIUM
Yuval Felendler, Parth A. Gandhi, Idan Habler +2 more
Model Context Protocols (MCPs) provide a unified platform for agent systems to discover, select, and orchestrate tools across heterogeneous execution...
2 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Shahriar Golchin, Marc Wetter
We systematically evaluate the quality of widely used AI safety datasets from two perspectives: in isolation and in practice. In isolation, we...
2 months ago cs.CR cs.AI cs.CL
PDF
Benchmark MEDIUM
Haodong Zhao, Jinming Hu, Gongshen Liu
Federated learning security research has predominantly focused on backdoor threats from a minority of malicious clients that intentionally corrupt...
Attack MEDIUM
Varun Pratap Bhardwaj
We present SuperLocalMemory, a local-first memory system for multi-agent AI that defends against OWASP ASI06 memory poisoning through architectural...
2 months ago cs.AI cs.CR
PDF
Attack MEDIUM
Chengzhi Hu, Jonas Dornbusch, David Lüdke +2 more
Adversarial training for LLMs is one of the most promising methods to reliably improve robustness against adversaries. However, despite significant...
2 months ago cs.LG cs.AI cs.CR
PDF
Defense MEDIUM
David Puertolas Merenciano, Ekaterina Vasyagina, Raghav Dixit +4 more
LoRA adapters let users fine-tune large language models (LLMs) efficiently. However, LoRA adapters are shared through open repositories like Hugging...
2 months ago cs.CR cs.AI cs.CL
PDF
Attack MEDIUM
Yohan Lee, Jisoo Jang, Seoyeon Choi +2 more
Tool-using LLM agents increasingly coordinate real workloads by selecting and chaining third-party tools based on text-visible metadata such as tool...
2 months ago cs.CL cs.CR
PDF
Defense MEDIUM
Tianyu Chen, Dongrui Liu, Xia Hu +2 more
Clawdbot is a self-hosted, tool-using personal AI agent with a broad action space spanning local execution and web-mediated workflows, which raises...
2 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Zhenhong Zhou, Yuanhe Zhang, Hongwei Cai +6 more
The Model Context Protocol (MCP) standardizes tool use for LLM-based agents and enables third-party servers. This openness introduces a security...
2 months ago cs.CR cs.CL
PDF
Survey MEDIUM
Matic Korun
We propose a geometric taxonomy of large language model hallucinations based on observable signatures in token embedding cluster structure. By...
Benchmark MEDIUM
Max Fomin
Detecting prompt injection and jailbreak attacks is critical for deploying LLM-based agents safely. As agents increasingly process untrusted data...
Attack MEDIUM
Mario Marín Caballero, Miguel Betancourt Alonso, Daniel Díaz-López +3 more
The most valuable asset of any cloud-based organization is data, which is increasingly exposed to sophisticated cyberattacks. Until recently, the...
2 months ago cs.CR cs.AI
PDF