Tool MEDIUM
Zhibo Liang, Tianze Hu, Zaiye Chen +1 more
Autonomous Large Language Model (LLM) agents exhibit significant vulnerability to Indirect Prompt Injection (IPI) attacks. These attacks hijack agent...
3 months ago cs.AI cs.CL cs.CR
PDF
Attack MEDIUM
Donghang Duan, Xu Zheng, Yuefeng He +3 more
Current LLM-based text anonymization frameworks usually rely on remote API services from powerful LLMs, which creates an inherent privacy paradox:...
3 months ago cs.CR cs.CL
PDF
Defense MEDIUM
Jehyeok Yeon, Federico Cinus, Yifan Wu +1 more
Large language models (LLMs) face critical safety challenges, as they can be manipulated to generate harmful content through adversarial prompts and...
3 months ago cs.LG cs.AI
PDF
Tool MEDIUM
Arush Sachdeva, Rajendraprasad Saravanan, Gargi Sarkar +2 more
Cybercrime increasingly exploits human cognitive biases in addition to technical vulnerabilities, yet most existing analytical frameworks focus...
3 months ago cs.CR cs.AI cs.CY
PDF
Tool MEDIUM
Xianzong Wu, Xiaohong Li, Lili Quan +1 more
Large language models (LLMs) are increasingly expanding their real-world applications across domains, e.g., question answering, autonomous driving,...
3 months ago cs.AI cs.LG
PDF
Survey MEDIUM
Mehrab Hosain, Sabbir Alom Shuvo, Matthew Ogbe +4 more
The modern web stack, which is dominated by browser-based applications and API-first backends, now operates under an adversarial equilibrium where...
3 months ago cs.CR cs.AI cs.LG
PDF
Benchmark MEDIUM
Cheng Cheng, Jinqiu Yang
Code-focused Large Language Models (LLMs), such as Codex and StarCoder, have demonstrated remarkable capabilities in enhancing developer...
Defense MEDIUM
Sheng Liu, Panos Papadimitratos
Federated Learning (FL) has drawn the attention of the Intelligent Transportation Systems (ITS) community. FL can train various models for ITS tasks,...
3 months ago cs.CR cs.AI
PDF
Defense MEDIUM
Jason Vega, Gagandeep Singh
A frustratingly easy technique known as the prefilling attack has been shown to effectively circumvent the safety alignment of frontier LLMs by...
3 months ago cs.CR cs.AI
PDF
Defense MEDIUM
Jiale Zhao, Xing Mou, Jinlin Wu +7 more
Medical Multimodal Large Language Models (Medical MLLMs) have achieved remarkable progress in specialized medical tasks; however, research into their...
3 months ago cs.LG cs.AI cs.CL
PDF
Benchmark MEDIUM
Ashish Hooda, Mihai Christodorescu, Chuangang Ren +3 more
Machine learning (ML) models for code clone detection determine whether two pieces of code are semantically equivalent, which in turn is a key...
3 months ago cs.SE cs.AI
PDF
Survey MEDIUM
Wei Zhao, Zhe Li, Jun Sun
Large Language Models (LLMs) exhibit remarkable capabilities but remain vulnerable to adversarial manipulations such as jailbreaking, where crafted...
3 months ago cs.CR cs.AI
PDF
Tool MEDIUM
Eranga Bandara, Amin Hass, Ross Gore +8 more
AI agent-based systems are becoming increasingly integral to modern software architectures, enabling autonomous decision-making, dynamic task...
3 months ago cs.AI cs.CR
PDF
Attack MEDIUM
Jinbo Liu, Defu Cao, Yifei Wei +6 more
Graph topology is a fundamental determinant of memory leakage in multi-agent LLM systems, yet its effects remain poorly quantified. We introduce MAMA...
3 months ago cs.CR cs.AI cs.CL
PDF
Benchmark MEDIUM
Chenlin Xu, Lei Zhang, Lituan Wang +5 more
Due to the scarcity of annotated data and the substantial computational costs of models, conventional tuning methods in medical image segmentation...
Defense MEDIUM
Biagio Montaruli, Luca Compagna, Serena Elisa Ponta +1 more
The rise of supply chain attacks via malicious Python packages demands robust detection solutions. Current approaches, however, overlook two critical...
3 months ago cs.CR cs.LG
PDF
Benchmark MEDIUM
Yizhou Zhao, Zhiwei Steven Wu, Adam Block
Watermarking aims to embed hidden signals in generated text that can be reliably detected when given access to a secret key. Open-weight language...
3 months ago cs.LG cs.AI cs.CR
PDF
Attack MEDIUM
Itay Yona, Amir Sarid, Michael Karasik +1 more
We introduce $\textbf{Doublespeak}$, a simple in-context representation hijacking attack against large language models (LLMs). The attack works by...
3 months ago cs.CL cs.AI cs.CR
PDF
Benchmark MEDIUM
Tengyun Ma, Jiaqi Yao, Daojing He +4 more
Large Language Models (LLMs) have emerged as powerful tools for diverse applications. However, their uniform token processing paradigm introduces...
3 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Hanxiu Zhang, Yue Zheng
The protection of Intellectual Property (IP) in Large Language Models (LLMs) represents a critical challenge in contemporary AI research. While...
3 months ago cs.CR cs.AI cs.CL
PDF