Attack HIGH
Md Rysul Kabir, Zoran Tiganj
Open-weight language models can be rendered unsafe through several distinct interventions, but the resulting models may differ substantially in...
3 weeks ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
Thamilvendhan Munirathinam
Current open-source prompt-injection detectors converge on two architectural choices: regular-expression pattern matching and fine-tuned transformer...
3 weeks ago cs.CR cs.CL
PDF
Attack HIGH
Wentao Zhang, Yan Zhuang, ZhuHang Zheng +3 more
Existing jamming attacks on Retrieval-Augmented Generation (RAG) systems typically induce explicit refusals or denial-of-service behaviors, which are...
3 weeks ago cs.CR cs.AI
PDF
Attack HIGH
Jin Zhao, Marta Knežević, Tanja Käser
Large Language Models (LLMs) are increasingly used in education, yet their default helpfulness often conflicts with pedagogical principles. Prior...
3 weeks ago cs.CR cs.AI
PDF
Attack MEDIUM
Jianming Tong, Hanshen Xiao, Krishna Kumar Nair +5 more
Multi-user virtual reality enables immersive interaction. However, rendering avatars for numerous participants on each headset incurs prohibitive...
3 weeks ago cs.CR cs.AR cs.CV
PDF
Attack HIGH
Haochun Tang, Yuliang Yan, Jiahua Lu +2 more
Cost-aware routing dynamically dispatches user queries to models of varying capability to balance performance and inference cost. However, the...
3 weeks ago cs.CR cs.AI cs.CL
PDF
Attack MEDIUM
Xuanli He, Bilgehan Sel, Faizan Ali +3 more
Large Language Models (LLMs) are increasingly exposed to adaptive jailbreaking, particularly in high-stakes Chemical, Biological, Radiological, and...
3 weeks ago cs.CL cs.CR
PDF
Attack HIGH
Meng Chen, Kun Wang, Li Lu +2 more
Modern large audio-language models (LALMs) power intelligent voice interactions by tightly integrating audio and text. This integration, however,...
3 weeks ago cs.CR cs.AI cs.SD
PDF
Attack MEDIUM
Firas Ben Hmida, Philemon Hailemariam, Kashif Ali Khan +1 more
Deep neural networks (DNNs) remain largely opaque at inference time, limiting our ability to detect and diagnose malicious input manipulations such...
Attack HIGH
Fortunatus Aabangbio Wulnye, Justice Owusu Agyemang, Kwame Opuni-Boachie Obour Agyekum +3 more
Ensuring the reliability of machine learning-based intrusion detection systems remains a critical challenge in Internet of Things (IoT) environments,...
3 weeks ago cs.CR cs.AI
PDF
Attack MEDIUM
Pavel Chizhov, Egor Bogomolov, Ivan P. Yamshchikov
Efficiency and safety of Large Language Models (LLMs), among other factors, rely on the quality of tokenization. A good tokenizer not only improves...
Attack HIGH
Andrii Vakhnovskyi
The United States designates Food and Agriculture as one of sixteen critical infrastructure sectors, yet no mandatory cybersecurity requirements...
4 weeks ago cs.CR eess.SY
PDF
Attack HIGH
Yingying Zhao, Chengyin Hu, Qike Zhang +7 more
Vision-Language Models (VLMs) have shown remarkable performance, yet their security remains insufficiently understood. Existing adversarial studies...
Attack MEDIUM
Shaopeng Fu, Di Wang
Adversarial training (AT) is an effective defense for large language models (LLMs) against jailbreak attacks, but performing AT on LLMs is costly. To...
4 weeks ago cs.LG cs.CR stat.ML
PDF
Attack MEDIUM
Anasuya Chattopadhyay, Daniel Reti, Hans D. Schotten
Cloud networks increasingly rely on machine learning based Network Intrusion Detection Systems to defend against evolving cyber threats. However,...
4 weeks ago cs.LG cs.CR
PDF
Attack HIGH
Jianhao Chen, Haoyang Chen, Hanjie Zhao +2 more
The rapid evolution of Vision-Language Models (VLMs) has catalyzed unprecedented capabilities in artificial intelligence; however, this continuous...
4 weeks ago cs.AI cs.MM
PDF
Attack MEDIUM
Vladimir A. Mazin, Mikhail A. Zorin, Dmitrii S. Korzh +3 more
Passwords remain a dominant authentication method, yet their security is routinely subverted by predictable user choices and large-scale...
4 weeks ago cs.CR cs.AI
PDF
Attack HIGH
Junyu Ren, Xingjian Pan, Wensheng Gan +1 more
Prompt injection has emerged as a critical security threat to large language models (LLMs), yet existing studies predominantly focus on...
Attack HIGH
Ravikumar Balakrishnan, Sanket Mendapara, Ankit Garg
We study typographic prompt injection attacks on vision-language models (VLMs), where adversarial text is rendered as images to bypass safety...
Attack HIGH
Yulin Chen, Tri Cao, Haoran Li +7 more
Web agents powered by vision-language models (VLMs) enable autonomous interaction with web environments by perceiving and acting on both visual and...