FraudFox: Adaptable Fraud Detection in the Real World
Matthew Butler, Yi Fan, Christos Faloutsos
The proposed method (FraudFox) provides solutions to adversarial attacks in a resource-constrained environment. We focus on questions like the...
Zhifang Zhang, Bojun Yang, Shuo He +5 more
Despite their strong multimodal performance, large vision-language models (LVLMs) are vulnerable during fine-tuning to backdoor attacks, where...
Zheng Gao, Yifan Yang, Xiaoyu Li +4 more
Watermarking the initial noise of diffusion models has emerged as a promising approach for image provenance, but content-independent noise patterns...
Sihao Ding
We introduce Colluding LoRA (CoLoRA), an attack in which each adapter appears benign and plausibly functional in isolation, yet their linear...
Zonghao Ying, Xiao Yang, Siyang Wu +7 more
The rapid evolution of Large Language Models (LLMs) into autonomous, tool-calling agents has fundamentally altered the cybersecurity landscape....
Jiangrong Wu, Zitong Yao, Yuhong Nan +1 more
Tool-augmented LLM agents increasingly rely on multi-step, multi-tool workflows to complete real tasks. This design expands the attack surface,...
Xiangkui Cao, Jie Zhang, Meina Kan +2 more
Large Vision-Language Models (LVLMs) have shown remarkable potential across a wide array of vision-language tasks, leading to their adoption in...
Darren Cheng, Wen-Kwang Tsao
Prompt injection remains one of the most practical attack vectors against LLM-integrated applications. We replicate the Microsoft LLMail-Inject...
Siddharth Srikanth, Freddie Liang, Sophie Hsu +9 more
Vision-Language-Action (VLA) models have significant potential to enable general-purpose robotic systems for a range of vision-language tasks....
Xinhai Wang, Shaopeng Fu, Shu Yang +3 more
Suffix jailbreak attacks serve as a systematic method for red-teaming Large Language Models (LLMs) but suffer from prohibitive computational costs,...
Davi Bonetto
State Space Models (SSMs) such as Mamba achieve linear-time sequence processing through input-dependent recurrence, but this mechanism introduces a...
Ninghui Li, Kaiyuan Zhang, Kyle Polley +1 more
This article, a lightly adapted version of Perplexity's response to NIST/CAISI Request for Information 2025-0035, details our observations and...
Alexandre Le Mercier, Thomas Demeester, Chris Develder
State space models (SSMs) like Mamba have gained significant traction as efficient alternatives to Transformers, achieving linear complexity while...
Haodong Zhao, Jinming Hu, Yijie Bai +6 more
Federated Language Models (FedLM) enable collaborative learning without sharing raw data, yet they introduce a critical vulnerability, as every...
Chiyuan He, Zihuan Qiu, Fanman Meng +4 more
Continual learning of pretrained vision-language models (VLMs) is prone to catastrophic forgetting, yet current approaches adapt to new tasks without...
Sarbartha Banerjee, Prateek Sahu, Anjo Vahldiek-Oberwagner +2 more
Rapid progress in generative AI has given rise to Compound AI systems - pipelines composed of multiple large language models (LLMs), software tools...
Junjie Chu, Yiting Qu, Ye Leng +4 more
Large Language Models (LLMs) are increasingly trained to align with human values, primarily focusing on the task level, i.e., refusing to execute...
Kele Xu, Yifan Wang, Ming Feng +5 more
Human-computer interaction has traditionally relied on the acoustic channel, a dependency that introduces systemic vulnerabilities to environmental...
J Alex Corll
Prompt injection defenses are often framed as semantic understanding problems and delegated to increasingly large neural detectors. For the first...