Attack HIGH
Mohammad Karami, Mohammad Reza Nemati, Aidin Kazemi +3 more
Artificial intelligence (AI) has shown great potential in medical imaging, particularly for brain tumor detection using Magnetic Resonance Imaging...
6 months ago cs.LG cs.AI cs.CR
PDF
Attack HIGH
Hongwei Yao, Yun Xia, Shuo Shao +3 more
Large language models (LLMs) increasingly employ guardrails to enforce ethical, legal, and application-specific constraints on their outputs. While...
6 months ago cs.CR cs.CL
PDF
Attack HIGH
Geoff McDonald, Jonathan Bar Or
Large Language Models (LLMs) are increasingly deployed in sensitive domains including healthcare, legal services, and confidential communications,...
6 months ago cs.CR cs.AI
PDF
Attack HIGH
Yize Liu, Yunyun Hou, Aina Sui
Large Language Models (LLMs) have been widely deployed across various applications, yet their potential security and ethical risks have raised...
6 months ago cs.CR cs.CL
PDF
Attack HIGH
Amy Chang, Nicholas Conley, Harish Santhanalakshmi Ganesan +1 more
Open-weight models provide researchers and developers with accessible foundations for diverse downstream applications. We tested the safety and...
6 months ago cs.CR cs.LG
PDF
Attack HIGH
Rishi Rajesh Shah, Chen Henry Wu, Shashwat Saxena +3 more
Recent advances in long-context language models (LMs) have enabled million-token inputs, expanding their capabilities across complex tasks like...
6 months ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Chloe Loughridge, Paul Colognese, Avery Griffin +3 more
As AI deployments become more complex and high-stakes, it becomes increasingly important to be able to estimate their risk. AI control is one...
Attack HIGH
Aashray Reddy, Andrew Zagula, Nicholas Saban
Large Language Models (LLMs) remain vulnerable to jailbreaking attacks where adversarial prompts elicit harmful outputs. Yet most evaluations focus...
6 months ago cs.CL cs.AI cs.CR
PDF
Attack HIGH
Chen-Wei Chang, Shailik Sarkar, Hossein Salemi +7 more
Scam detection remains a critical challenge in cybersecurity as adversaries craft messages that evade automated filters. We propose a Hierarchical...
6 months ago cs.CR cs.AI
PDF
Attack HIGH
Daniyal Ganiuly, Assel Smaiyl
Large Language Models (LLMs) are increasingly used in intelligent systems that perform reasoning, summarization, and code generation. Their ability...
6 months ago cs.CR cs.AI
PDF
Attack HIGH
Hamin Koo, Minseon Kim, Jaehyung Kim
Identifying the vulnerabilities of large language models (LLMs) is crucial for improving their safety by addressing inherent weaknesses. Jailbreaks,...
Attack HIGH
Xin Liu, Aoyang Zhou
Visual-Language Pre-training (VLP) models have achieved significant performance across various downstream tasks. However, they remain vulnerable to...
6 months ago cs.CV cs.AI
PDF
Attack HIGH
Berk Atil, Rebecca J. Passonneau, Fred Morstatter
Large language models (LLMs) undergo safety alignment after training and tuning, yet recent work shows that safety can be bypassed through jailbreak...
Attack HIGH
Peng Ding, Jun Kuang, Wen Sun +5 more
Large language models (LLMs) remain vulnerable to jailbreaking attacks despite their impressive capabilities. Investigating these weaknesses is...
Attack HIGH
Phil Blandfort, Robert Graham
Activation probes are attractive monitors for AI systems due to low cost and latency, but their real-world robustness remains underexplored. We ask:...
6 months ago cs.LG cs.AI
PDF
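The card above describes activation probes as low-cost, low-latency monitors. As a rough illustration of the general technique (not this paper's method), an activation probe is typically just a linear classifier fit on a model's hidden-state vectors. The sketch below uses random vectors as stand-ins for real residual-stream activations, and the labels, dimensions, and separating direction are all placeholder assumptions:

```python
# Minimal sketch of an activation probe: a linear classifier trained on
# hidden-state vectors to flag unwanted behavior. Everything here is
# illustrative -- the random vectors stand in for real model activations,
# and the 0/1 labels stand in for benign/harmful annotations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
d = 768   # hypothetical hidden-state width
n = 2000  # hypothetical number of labeled prompts

# Fake "activations": harmful examples are shifted along a random direction,
# mimicking the kind of linearly decodable signal probes rely on.
direction = rng.normal(size=d)
labels = rng.integers(0, 2, size=n)
acts = rng.normal(size=(n, d)) + np.outer(labels, direction) * 0.5

X_train, X_test, y_train, y_test = train_test_split(acts, labels, random_state=0)

probe = LogisticRegression(max_iter=1000)  # the "probe" is just a linear map
probe.fit(X_train, y_train)
print(f"held-out accuracy: {probe.score(X_test, y_test):.3f}")
```

The appeal the abstract points to is visible here: inference with the probe is a single matrix-vector product, so it adds negligible cost and latency on top of the monitored model. The robustness question the paper raises is whether such a simple decision boundary survives distribution shift or adversarial pressure.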
Attack HIGH
Ruofan Liu, Yun Lin, Zhiyong Huang +1 more
Large language models (LLMs) are increasingly integrated into IT infrastructures, where they process user data according to predefined instructions....
6 months ago cs.CR cs.AI
PDF
Attack HIGH
Xin Yao, Haiyang Zhao, Yimin Chen +3 more
The Contrastive Language-Image Pretraining (CLIP) model has significantly advanced vision-language modeling by aligning image-text pairs from...
6 months ago cs.CV cs.CR cs.LG
PDF
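The card above refers to CLIP's image-text alignment. For readers unfamiliar with it, the standard CLIP training objective (again, the general published technique, not this paper's contribution) is a symmetric contrastive loss over a batch of normalized image and text embeddings. The sketch below uses random embeddings as placeholders for real encoder outputs; the batch size, embedding width, and temperature are illustrative assumptions:

```python
# Minimal sketch of the CLIP-style contrastive objective: L2-normalized image
# and text embeddings are compared with a temperature-scaled dot product, and
# a symmetric cross-entropy pulls each matching pair together while pushing
# all mismatched pairs in the batch apart.
import numpy as np

rng = np.random.default_rng(0)
batch, dim = 8, 512                  # hypothetical batch size / embed width
img = rng.normal(size=(batch, dim))  # stand-in for image-encoder outputs
txt = rng.normal(size=(batch, dim))  # stand-in for text-encoder outputs

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # Row-wise softmax cross-entropy, computed stably in log space.
    logits = logits - logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def clip_loss(img_emb, txt_emb, temperature=0.07):
    # Similarity matrix: entry (i, j) scores image i against text j.
    logits = l2norm(img_emb) @ l2norm(txt_emb).T / temperature
    labels = np.arange(len(logits))  # the i-th image matches the i-th text
    # Symmetric: classify the right text for each image, and vice versa.
    return (cross_entropy(logits, labels) + cross_entropy(logits.T, labels)) / 2

print(f"contrastive loss on random embeddings: {clip_loss(img, txt):.3f}")
```

Because the alignment is learned purely from pairwise similarity, adversarial perturbations that move an image embedding toward an unrelated text embedding can transfer across downstream tasks, which is the vulnerability this line of work probes.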
Attack HIGH
Kayua Oleques Paim, Rodrigo Brandao Mansilha, Diego Kreutz +2 more
The rapid proliferation of Large Language Models (LLMs) has raised significant concerns about their security against adversarial attacks. In this...
6 months ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Alex Irpan, Alexander Matt Turner, Mark Kurzeja +2 more
An LLM's factuality and refusal training can be compromised by simple changes to a prompt. Models often adopt user beliefs (sycophancy) or satisfy...
6 months ago cs.LG cs.AI
PDF
Attack HIGH
David Schmotz, Sahar Abdelnabi, Maksym Andriushchenko
Enabling continual learning in LLMs remains a key unresolved research challenge. In a recent announcement, a frontier LLM company made a step towards...