Recent studies show that gradient-based universal image jailbreaks on vision-language models (VLMs) exhibit little or no cross-model transferability,...
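For context, a "gradient-based universal image jailbreak" of the kind this snippet evaluates typically means optimizing a single image perturbation, by gradient steps against a model's loss, so that the same perturbation steers the model's output across many different inputs. The sketch below illustrates only that optimization pattern on a toy stand-in model; the ToyVLM class, the loss, and all hyperparameters are illustrative assumptions, not any cited paper's method.

```python
import torch

torch.manual_seed(0)

class ToyVLM(torch.nn.Module):
    """Stand-in for a vision-language model: maps an image to logits over two 'answers'."""
    def __init__(self):
        super().__init__()
        self.proj = torch.nn.Linear(3 * 8 * 8, 2)

    def forward(self, image):
        return self.proj(image.flatten(1))

model = ToyVLM()
images = torch.rand(16, 3, 8, 8)            # batch standing in for many distinct inputs
target = torch.zeros(16, dtype=torch.long)  # index of the attacker's desired output

# One perturbation shared across *all* inputs -- that is what makes it "universal".
delta = torch.zeros(1, 3, 8, 8, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
eps = 8 / 255                               # L-infinity budget (a common convention)

for _ in range(200):
    logits = model((images + delta).clamp(0, 1))
    loss = torch.nn.functional.cross_entropy(logits, target)  # drive every input to target
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)             # project back into the perturbation budget

print("final attack loss:", loss.item())
```

Because the same delta is applied to every input, rather than being re-optimized per image, whether it transfers across models is a natural question, which is the transferability finding the snippet reports on.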
This paper proposes a jailbreak-prompt detection method for large language models (LLMs) to defend against jailbreak attacks. Although recent LLMs...
This paper proposes a guaranteed defense method for large language models (LLMs) to safeguard against jailbreaking attacks. Drawing inspiration from...
Intent-obfuscation-based jailbreak attacks on multimodal large language models (MLLMs) transform a harmful query into a concealed multimodal input to...
Wesley Hanwen Deng, Mingxi Yan, Sunnie S. Y. Kim, et al.
Recent developments in AI safety research have called for red-teaming methods that effectively surface potential risks posed by generative AI models,...
Raja Sekhar Rao Dheekonda, Will Pearce, Nick Landers
AI systems are entering critical domains like healthcare, finance, and defense, yet remain vulnerable to adversarial attacks. While AI red teaming is...
We show that remotely hosted applications employing in-context learning, when augmented with a retrieval function to select in-context examples, can be...
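A rough sketch of the application setup this last snippet describes, in-context learning where a retrieval function selects the few-shot examples for each incoming query. The datastore, the embed and retrieve helpers, and the toy embeddings are all illustrative assumptions, not the paper's implementation.

```python
import hashlib
import numpy as np

EMBED_DIM = 32
rng = np.random.default_rng(0)

# Toy datastore of (text, label) in-context examples. In a real system the
# embeddings would come from an embedding model; random vectors are used here
# purely for illustration.
EXAMPLES = [
    ("The movie was wonderful.", "positive"),
    ("I want a refund immediately.", "negative"),
    ("Works exactly as advertised.", "positive"),
    ("The device broke after a day.", "negative"),
]
EXAMPLE_EMBEDDINGS = rng.normal(size=(len(EXAMPLES), EMBED_DIM))

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: a vector seeded from a stable hash of the text."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).normal(size=EMBED_DIM)

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Select the k stored examples most similar to the query (cosine similarity)."""
    q = embed(query)
    sims = EXAMPLE_EMBEDDINGS @ q / (
        np.linalg.norm(EXAMPLE_EMBEDDINGS, axis=1) * np.linalg.norm(q) + 1e-9
    )
    return [EXAMPLES[i] for i in np.argsort(-sims)[:k]]

def build_prompt(query: str) -> str:
    """Splice the retrieved examples into a few-shot prompt ending with the query."""
    shots = "\n".join(f"Input: {t}\nLabel: {l}" for t, l in retrieve(query))
    return f"{shots}\nInput: {query}\nLabel:"

print(build_prompt("Absolutely loved the service."))
```

The design point worth noting is that the retrieved examples are spliced directly into the prompt, so the retrieval step fully determines what few-shot content the model sees for a given query.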