Benchmark HIGH
Caleb Gross
Security research is fundamentally a problem of resource constraint and consequent prioritization. There is simply too much attack surface and too...
3 months ago cs.CR cs.IR
Attack HIGH
Shiji Zhao, Shukun Xiong, Yao Huang +7 more
Multimodal Large Language Models (MLLMs) are widely used in various fields due to their powerful cross-modal comprehension and generation...
Attack HIGH
Weikai Lu, Ziqian Zeng, Kehua Zhang +5 more
Multimodal Large Language Models (MLLMs) are increasingly vulnerable to multimodal Indirect Prompt Injection (IPI) attacks, which embed malicious...
3 months ago cs.CR cs.MM
Defense MEDIUM
Jason Vega, Gagandeep Singh
A frustratingly easy technique known as the prefilling attack has been shown to effectively circumvent the safety alignment of frontier LLMs by...
3 months ago cs.CR cs.AI
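For context on the abstract above: a prefilling attack is widely documented as supplying the opening tokens of the assistant's reply, so the model continues from an already-compliant prefix instead of deciding whether to refuse. A minimal sketch of the request shape (the field names and helper below are illustrative, not tied to this paper or any specific API):

```python
# Sketch of a prefilling attack payload against a chat-style LLM API.
# The attacker appends a partially written assistant turn; the model is
# then asked to continue from that prefix, skipping the point at which
# a refusal would normally be generated.
messages = [
    {"role": "user", "content": "<disallowed request>"},
    # Prefilled start of the assistant's answer, supplied by the attacker:
    {"role": "assistant", "content": "Sure! Step 1:"},
]

def is_prefilled(msgs):
    """True if the conversation ends with an attacker-supplied assistant prefix."""
    return bool(msgs) and msgs[-1]["role"] == "assistant"
```

A defense in this setting typically has to reason about the conversation shape itself, e.g. rejecting or re-checking requests where the final turn is an assistant prefix.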
Benchmark HIGH
Xiuyuan Chen, Jian Zhao, Yuxiang He +10 more
While the deployment of large language models (LLMs) in high-value industries continues to expand, the systematic assessment of their safety against...
Defense MEDIUM
Jiale Zhao, Xing Mou, Jinlin Wu +7 more
Medical Multimodal Large Language Models (Medical MLLMs) have achieved remarkable progress in specialized medical tasks; however, research into their...
3 months ago cs.LG cs.AI cs.CL
Benchmark MEDIUM
Ashish Hooda, Mihai Christodorescu, Chuangang Ren +3 more
Machine learning (ML) models for code clone detection determine whether two pieces of code are semantically equivalent, which in turn is a key...
3 months ago cs.SE cs.AI
Attack HIGH
Fan Yang
Large Language Models (LLMs) have demonstrated exceptional performance across various tasks, but their security vulnerabilities can be exploited by...
3 months ago cs.CR cs.AI
Tool LOW
Zag ElSayed, Craig Erickson, Ernest Pedapati
Healthcare AI systems have historically faced challenges in merging contextual reasoning, long-term state management, and human-verifiable workflows...
3 months ago cs.AI q-bio.QM
Benchmark LOW
Feijiang Han
Malicious WebShells pose a significant and evolving threat by compromising critical digital infrastructures and endangering public services in...
3 months ago cs.CR cs.AI cs.LG
Tool HIGH
M Zeeshan, Saud Satti
Multimodal Artificial Intelligence (AI) systems, particularly Vision-Language Models (VLMs), have become integral to critical applications ranging...
3 months ago cs.AI cs.MA
Survey MEDIUM
Wei Zhao, Zhe Li, Jun Sun
Large Language Models (LLMs) exhibit remarkable capabilities but remain vulnerable to adversarial manipulations such as jailbreaking, where crafted...
3 months ago cs.CR cs.AI
Tool MEDIUM
Eranga Bandara, Amin Hass, Ross Gore +8 more
AI agent-based systems are becoming increasingly integral to modern software architectures, enabling autonomous decision-making, dynamic task...
3 months ago cs.AI cs.CR
Attack MEDIUM
Jinbo Liu, Defu Cao, Yifei Wei +6 more
Graph topology is a fundamental determinant of memory leakage in multi-agent LLM systems, yet its effects remain poorly quantified. We introduce MAMA...
3 months ago cs.CR cs.AI cs.CL
Benchmark MEDIUM
Chenlin Xu, Lei Zhang, Lituan Wang +5 more
Due to the scarcity of annotated data and the substantial computational costs of models, conventional tuning methods in medical image segmentation...
Defense MEDIUM
Biagio Montaruli, Luca Compagna, Serena Elisa Ponta +1 more
The rise of supply chain attacks via malicious Python packages demands robust detection solutions. Current approaches, however, overlook two critical...
3 months ago cs.CR cs.LG
Tool LOW
Peter B. Walker, Hannah Davidson, Aiden Foster +3 more
Large Language Models (LLMs) have transformed natural language processing and hold growing promise for advancing science, healthcare, and...
Benchmark MEDIUM
Yizhou Zhao, Zhiwei Steven Wu, Adam Block
Watermarking aims to embed hidden signals in generated text that can be reliably detected when given access to a secret key. Open-weight language...
3 months ago cs.LG cs.AI cs.CR
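As background to the watermarking abstract above: a common scheme (not necessarily the one studied in this paper) partitions the vocabulary at each step into a key-dependent "green list" and biases generation toward green tokens; detection with the secret key then checks whether green tokens are over-represented. A toy sketch, where `green_fraction` and the hash-based green test are illustrative assumptions:

```python
import hashlib

def green_fraction(tokens, key):
    """Fraction of tokens falling in a key-dependent pseudorandom green list.

    For each token, a bit is derived from the secret key and the preceding
    token; watermarked text is generated so that 'green' tokens occur more
    often than the ~50% expected under unwatermarked text.
    """
    hits = 0
    for prev, tok in zip(tokens, tokens[1:]):
        digest = hashlib.sha256(f"{key}:{prev}:{tok}".encode()).digest()
        if digest[0] % 2 == 0:  # token is in the green half for this context
            hits += 1
    return hits / max(len(tokens) - 1, 1)
```

A detector would flag text whose green fraction is statistically above 0.5; without the key, an observer cannot reconstruct the green lists, which is exactly the property that open-weight release puts under pressure.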
Attack MEDIUM
Itay Yona, Amir Sarid, Michael Karasik +1 more
We introduce Doublespeak, a simple in-context representation hijacking attack against large language models (LLMs). The attack works by...
3 months ago cs.CL cs.AI cs.CR
Benchmark MEDIUM
Tengyun Ma, Jiaqi Yao, Daojing He +4 more
Large Language Models (LLMs) have emerged as powerful tools for diverse applications. However, their uniform token processing paradigm introduces...
3 months ago cs.CR cs.AI