Attack MEDIUM
George Mikros
Large language models (LLMs) present a dual challenge for forensic linguistics. They serve as powerful analytical tools enabling scalable corpus...
5 months ago cs.CL cs.CY
PDF
Attack MEDIUM
Sima Jafarikhah, Daniel Thompson, Eva Deans +2 more
Manual vulnerability scoring, such as assigning Common Vulnerability Scoring System (CVSS) scores, is a resource-intensive process that is often...
5 months ago cs.CR cs.AI cs.PL
PDF
Attack MEDIUM
Donghang Duan, Xu Zheng, Yuefeng He +3 more
Current LLM-based text anonymization frameworks usually rely on remote API services from powerful LLMs, which creates an inherent privacy paradox:...
5 months ago cs.CR cs.CL
PDF
Attack MEDIUM
Jinbo Liu, Defu Cao, Yifei Wei +6 more
Graph topology is a fundamental determinant of memory leakage in multi-agent LLM systems, yet its effects remain poorly quantified. We introduce MAMA...
5 months ago cs.CR cs.AI cs.CL
PDF
Attack MEDIUM
Itay Yona, Amir Sarid, Michael Karasik +1 more
We introduce $\textbf{Doublespeak}$, a simple in-context representation hijacking attack against large language models (LLMs). The attack works by...
5 months ago cs.CL cs.AI cs.CR
PDF
Attack MEDIUM
Hanxiu Zhang, Yue Zheng
The protection of Intellectual Property (IP) in Large Language Models (LLMs) represents a critical challenge in contemporary AI research. While...
5 months ago cs.CR cs.AI cs.CL
PDF
Attack MEDIUM
Thomas Rivasseau
Current research on operator control of Large Language Models improves model robustness against adversarial attacks and misbehavior by training on...
Attack MEDIUM
Adel Chehade, Edoardo Ragusa, Paolo Gastaldo +1 more
Traffic classification (TC) plays a critical role in cybersecurity, particularly in IoT and embedded contexts, where inspection must often occur...
5 months ago cs.NI cs.CR cs.LG
PDF
Attack MEDIUM
Zixia Wang, Gaojie Jin, Jia Hu +1 more
Recent advancements in Large Language Models (LLMs) have led to their widespread adoption in daily applications. Despite their impressive...
5 months ago cs.LG cs.AI
PDF
Attack MEDIUM
Alexander Boyd, Franz Nowak, David Hyland +2 more
World models have been recently proposed as sandbox environments in which AI agents can be trained and evaluated before deployment. Although...
Attack MEDIUM
Aaron Sandoval, Cody Rushing
The field of AI Control seeks to develop robust control protocols, deployment safeguards for untrusted AI which may be intentionally subversive....
5 months ago cs.CR cs.CL
PDF
Attack MEDIUM
Adeela Bashir, The Anh han, Zia Ush Shamszaman
The integration of large language models (LLMs) into healthcare IoT systems promises faster decisions and improved medical support. LLMs are also...
5 months ago cs.CR cs.LG cs.MA
PDF
Attack MEDIUM
K. J. Kevin Feng, Tae Soo Kim, Rock Yuren Pang +3 more
AI agents that take actions in their environment autonomously over extended time horizons require robust governance interventions to curb their...
5 months ago cs.CY cs.AI
PDF
Attack MEDIUM
Tong Wu, Weibin Wu, Zibin Zheng
Equipped with various tools and knowledge, GPTs, one kind of customized AI agents based on OpenAI's large language models, have illustrated great...
5 months ago cs.CR cs.SE
PDF
Attack MEDIUM
Zeng Wang, Minghao Shao, Akashdeep Saha +4 more
Graph neural networks (GNNs) have shown promise in hardware security by learning structural motifs from netlist graphs. However, this reliance on...
5 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Herman Errico, Jiquan Ngiam, Shanita Sojan
The Model Context Protocol (MCP) replaces static, developer-controlled API integrations with more dynamic, user-driven agent systems, which also...
Attack MEDIUM
Sidahmed Benabderrahmane, James Cheney, Talal Rahwan
Advanced Persistent Threats (APTs) pose a significant challenge in cybersecurity due to their stealthy and long-term nature. Modern supervised...
5 months ago cs.LG cs.AI cs.CR
PDF
Attack MEDIUM
Steven Peh
Large Language Models (LLMs) remain vulnerable to prompt injection attacks, representing the most significant security threat in production...
5 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Adarsh Kumarappan, Ayushi Mehrotra
The SmoothLLM defense provides a certification guarantee against jailbreaking attacks, but it relies on a strict "k-unstable" assumption that rarely...
5 months ago cs.LG cs.AI
PDF
Attack MEDIUM
Itay Hazan, Yael Mathov, Guy Shtar +2 more
Securing AI agents powered by Large Language Models (LLMs) represents one of the most critical challenges in AI security today. Unlike traditional...
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial