Attack HIGH
Fortunatus Aabangbio Wulnye, Justice Owusu Agyemang, Kwame Opuni-Boachie Obour Agyekum +3 more
Ensuring the reliability of machine learning-based intrusion detection systems remains a critical challenge in Internet of Things (IoT) environments,...
3 weeks ago cs.CR cs.AI
PDF
Attack HIGH
Andrii Vakhnovskyi
The United States designates Food and Agriculture as one of sixteen critical infrastructure sectors, yet no mandatory cybersecurity requirements...
4 weeks ago cs.CR eess.SY
PDF
Attack HIGH
Yingying Zhao, Chengyin Hu, Qike Zhang +7 more
Vision-Language Models (VLMs) have shown remarkable performance, yet their security remains insufficiently understood. Existing adversarial studies...
Attack HIGH
Jianhao Chen, Haoyang Chen, Hanjie Zhao +2 more
The rapid evolution of Vision-Language Models (VLMs) has catalyzed unprecedented capabilities in artificial intelligence; however, this continuous...
4 weeks ago cs.AI cs.MM
PDF
Attack HIGH
Junyu Ren, Xingjian Pan, Wensheng Gan +1 more
Prompt injection has emerged as a critical security threat to large language models (LLMs), yet existing studies predominantly focus on...
Attack HIGH
Ravikumar Balakrishnan, Sanket Mendapara, Ankit Garg
We study typographic prompt injection attacks on vision-language models (VLMs), where adversarial text is rendered as images to bypass safety...
Attack HIGH
Yulin Chen, Tri Cao, Haoran Li +7 more
Web agents powered by vision-language models (VLMs) enable autonomous interaction with web environments by perceiving and acting on both visual and...
Attack HIGH
Qingchao Shen, Zibo Xiao, Lili Huang +3 more
Large Language Models (LLMs) are increasingly deployed across diverse domains, yet their vulnerability to jailbreak attacks, where adversarial inputs...
4 weeks ago cs.CR cs.AI cs.SE
PDF
Attack HIGH
Dominik Blain
We present COBALT-TLA, a neuro-symbolic verification loop that pairs an LLM with TLC, the TLA+ model checker, in an automated REPL. The LLM generates...
4 weeks ago cs.CR cs.LO
PDF
Attack HIGH
Gamze Kirman Tokgoz, Onat Gungor, Tajana Rosing +1 more
Time-series forecasting aims to predict future values by modeling temporal dependencies in historical observations. It is a critical component of...
4 weeks ago cs.LG cs.CR
PDF
Tool HIGH
Wei Zhao, Zhe Li, Peixin Zhang +1 more
Tool-augmented Large Language Model (LLM) agents have demonstrated impressive capabilities in automating complex, multi-step real-world tasks, yet...
4 weeks ago cs.CR cs.AI
PDF
Tool HIGH
Yihao Zhang, Kai Wang, Jiangrong Wu +7 more
Large Language Models (LLMs) face prominent security risks from jailbreaking, a practice that manipulates models to bypass built-in security...
4 weeks ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Navid Azimi, Aditya Prakash, Yao Wang +1 more
Deep neural networks remain highly vulnerable to adversarial perturbations, limiting their reliability in security- and safety-critical applications....
1 month ago cs.CR cs.AI cs.CV
PDF
Attack HIGH
Yuanbo Xie, Yingjie Zhang, Yulin Li +5 more
Retrieval-Augmented Generation (RAG) systems augment large language models with external knowledge, yet introduce a critical security vulnerability:...
1 month ago cs.CR cs.AI cs.CL
PDF
Tool HIGH
Vu Tuan Truong, Long Bao Le
Large Language Models (LLMs), despite their impressive capabilities across domains, have been shown to be vulnerable to backdoor attacks. Prior...
1 month ago cs.CR cs.AI
PDF
Benchmark HIGH
Runpeng Geng, Chenlong Yin, Yanting Wang +2 more
Prompt injection attacks pose serious security risks across a wide range of real-world applications. While receiving increasing attention, the...
1 month ago cs.CR cs.AI cs.CL
PDF
Defense HIGH
Kevin Lira, Baldoino Fonseca, Davy Baía +2 more
Large Language Models (LLMs) have emerged as a promising approach to automated vulnerability detection. However, most prior studies have explored the use of...
1 month ago cs.SE cs.CR
PDF
Attack HIGH
Hanzhi Liu, Chaofan Shou, Hongbo Wen +3 more
Large language model (LLM) agents increasingly rely on third-party API routers to dispatch tool-calling requests across multiple upstream providers....
Survey HIGH
Yuming Xu, Mingtao Zhang, Zhuohan Ge +5 more
Retrieval-augmented generation (RAG) significantly enhances large language models (LLMs) but introduces novel security risks through external...
1 month ago cs.CR cs.AI
PDF
Attack HIGH
Wenpeng Xing, Moran Fang, Guangtai Wang +2 more
While Large Language Models (LLMs) have achieved remarkable performance, they remain vulnerable to jailbreak attacks that circumvent safety...