Tool HIGH
Junda Lin, Zhaomeng Zhou, Zhi Zheng +4 more
LLM agents operating in open environments face escalating risks from indirect prompt injection, particularly within the tool stream where manipulated...
2 months ago cs.CR cs.AI
Attack HIGH
Ahmad Alobaid, Martí Jordà Roca, Carlos Castillo +1 more
The availability of Large Language Models (LLMs) has led to a new generation of powerful chatbots that can be developed at relatively low cost. As...
2 months ago cs.CR cs.AI
Tool HIGH
Jingxiao Yang, Ping He, Tianyu Du +2 more
Recent advances in software vulnerability detection have been driven by Language Model (LM)-based approaches. However, these models remain vulnerable...
2 months ago cs.CR cs.AI
Attack HIGH
Balachandra Devarangadi Sunil, Isheeta Sinha, Piyush Maheshwari +3 more
Large language model agents equipped with persistent memory are vulnerable to memory poisoning attacks, where adversaries inject malicious...
2 months ago cs.CR cs.MA
Tool HIGH
Zhaoqi Wang, Zijian Zhang, Daqing He +5 more
Large language models (LLMs) have demonstrated remarkable capabilities across diverse applications; however, they remain critically vulnerable to...
2 months ago cs.CR cs.AI
Attack HIGH
Songze Li, Ruishi He, Xiaojun Jia +2 more
Large Language Models (LLMs) face a significant threat from multi-turn jailbreak attacks, where adversaries progressively steer conversations to...
2 months ago cs.CR cs.LG
Attack HIGH
Badhan Chandra Das, Md Tasnim Jawad, Joaquin Molto +2 more
In recent years, the security vulnerabilities of Multi-modal Large Language Models (MLLMs) have become a serious concern in the Generative Artificial...
2 months ago cs.CR cs.AI
Tool HIGH
Keerthi Kumar M., Swarun Kumar Joginpelly, Sunil Khemka +2 more
Background: Cyber-attacks have evolved rapidly in recent years, and many individuals and business owners have been affected by cyber-attacks in various...
2 months ago cs.CR cs.AI cs.LG
Tool HIGH
Qiang Yu, Xinran Cheng, Chuanyi Liu
As LLM agents transition from digital assistants to physical controllers in autonomous systems and robotics, they face an escalating threat from...
2 months ago cs.AI cs.CL cs.CR
Tool HIGH
Hongming Fei, Zilong Hu, Prosanta Gope +1 more
Physical Unclonable Functions (PUFs) serve as lightweight, hardware-intrinsic entropy sources widely deployed in IoT security applications. However,...
Attack HIGH
Zhiyuan Chang, Mingyang Li, Yuekai Huang +6 more
Large language model (LLM)-integrated applications have become increasingly prevalent, yet face critical security vulnerabilities from prompt...
2 months ago cs.AI cs.CR
Attack HIGH
Hoagy Cunningham, Jerry Wei, Zihan Wang +26 more
We introduce enhanced Constitutional Classifiers that deliver production-grade jailbreak robustness with dramatically reduced computational costs and...
2 months ago cs.CR cs.AI
Tool HIGH
Yunhao Feng, Yige Li, Yutao Wu +6 more
Large language model (LLM) agents execute tasks through multi-step workflows that combine planning, memory, and tool use. While this design enables...
2 months ago cs.AI cs.CL
Attack HIGH
Ahmad Mohammad Saber, Saeed Jafari, Zhengmao Ouyang +3 more
This paper presents a large language model (LLM)-based framework that adapts and fine-tunes compact LLMs for detecting cyberattacks on transformer...
2 months ago cs.CR cs.LG eess.SP
Attack HIGH
Iago Alves Brito, Walcy Santos Rezende Rios, Julia Soares Dollis +2 more
Current safety evaluations of large language models (LLMs) create a dangerous illusion of universality, aggregating "Identity Hate" into scalar...
2 months ago cs.CL cs.AI
Attack HIGH
Yu Yan, Sheng Sun, Mingfeng Li +6 more
Recently, users have suffered from LLM hallucinations and have become increasingly aware of the reliability gap of LLMs in open and...
Attack HIGH
Siyuan Li, Xi Lin, Jun Wu +5 more
Jailbreak attacks pose significant threats to large language models (LLMs), enabling attackers to bypass safeguards. However, existing reactive...
2 months ago cs.CR cs.AI
Attack HIGH
Ji Guo, Wenbo Jiang, Yansong Lin +7 more
Vision-Language-Action (VLA) models are widely deployed in safety-critical embodied AI applications such as robotics. However, their complex...
2 months ago cs.CR cs.LG
Benchmark HIGH
Quy-Anh Dang, Chris Ngo, Truong-Son Hy
As large language models (LLMs) become integral to safety-critical applications, ensuring their robustness against adversarial prompts is paramount....
Attack HIGH
Hang Fu, Wanli Peng, Yinghan Zhou +3 more
The widespread adoption of Large Language Models (LLMs) in commercial and research settings has intensified the need for robust intellectual property...