Attack HIGH
Ziyou Jiang, Lin Shi, Guowei Yang +3 more
Cyber attacks have become a serious threat to the security of software systems. Many organizations have built their security knowledge bases to...
Attack HIGH
Yunbei Zhang, Yingqiang Ge, Weijie Xu +3 more
Current multimodal red teaming treats images as wrappers for malicious payloads via typography or adversarial noise. These attacks are structurally...
3 months ago cs.CR cs.CV cs.LG
PDF
Attack HIGH
Ethan Rathbun, Wo Wei Lin, Alina Oprea +1 more
Simulated environments are a key piece in the success of Reinforcement Learning (RL), allowing practitioners and researchers to train decision making...
3 months ago cs.CR cs.LG cs.RO
PDF
Attack HIGH
Jafar Isbarov, Murat Kantarcioglu
As AI agents automate critical workloads, they remain vulnerable to indirect prompt injection (IPI) attacks. Current defenses rely on monitoring...
3 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Vishruti Kakkad, Paul Chung, Hanan Hibshi +1 more
An exponential growth of Machine Learning and its Generative AI applications brings with it significant security challenges, often referred to as...
3 months ago cs.CR cs.AI
PDF
Attack MEDIUM
Yike Sun, Haotong Yang, Zhouchen Lin +1 more
Tokenization is fundamental to how language models represent and process text, yet the behavior of widely used BPE tokenizers has received far less...
Attack MEDIUM
Ariel Fogel, Omer Hofman, Eilon Cohen +1 more
Open-weight language models are increasingly used in production settings, raising new security challenges. One prominent threat in this context is...
3 months ago cs.CR cs.LG
PDF
Attack MEDIUM
Leo Schwinn, Moritz Ladenburger, Tim Beyer +3 more
Automated \enquote{LLM-as-a-Judge} frameworks have become the de facto standard for scalable evaluation across natural language processing. For...
3 months ago cs.CL cs.AI
PDF
Attack HIGH
Joachim Schaeffer, Arjun Khandelwal, Tyler Tracy
Future AI deployments will likely be monitored for malicious behaviour. The ability of these AIs to subvert monitors by adversarially selecting...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Jaehyun Kwak, Nam Cao, Boryeong Cho +3 more
Adversarial attacks against Large Vision-Language Models (LVLMs) are crucial for exposing safety vulnerabilities in modern multimodal systems. Recent...
Attack HIGH
Yanshu Wang, Shuaishuai Yang, Jingjing He +1 more
Large Language Models (LLMs) face increasing threats from jailbreak attacks that bypass safety alignment. While prompt-based defenses such as...
3 months ago cs.CL cs.AI cs.CR
PDF
Attack MEDIUM
Youngji Roh, Hyunjin Cho, Jaehyung Kim
Large Language Models (LLMs) exhibit highly anisotropic internal representations, often characterized by massive activations, a phenomenon where a...
Attack MEDIUM
Zeming Wei, Qiaosheng Zhang, Xia Hu +1 more
Large Reasoning Models (LRMs) have achieved tremendous success with their chain-of-thought (CoT) reasoning, yet also face safety issues similar to...
3 months ago cs.LG cs.AI cs.CL
PDF
Attack HIGH
Derin Gezgin, Amartya Das, Shinhae Kim +3 more
Recently Large Language Models (LLMs) have been used in security vulnerability detection tasks including generating proof-of-concept (PoC) exploits....
Attack HIGH
Hoang Long Do, Nasrin Sohrabi, Muneeb Ul Hassan
Large language models (LLMs) have been widely adopted in modern software development lifecycles, where they are increasingly used to automate and...
Attack HIGH
Shutong Fan, Lan Zhang, Xiaoyong Yuan
Most adversarial threats in artificial intelligence target the computational behavior of models rather than the humans who rely on them. Yet modern...
Attack HIGH
Xilong Wang, Yinuo Liu, Zhun Wang +2 more
Prompt injection attacks manipulate webpage content to cause web agents to execute attacker-specified tasks instead of the user's intended ones....
3 months ago cs.CR cs.AI cs.CL
PDF
Attack MEDIUM
Andrew Draganov, Tolga H. Dur, Anandmayi Bhongade +1 more
We present a data poisoning attack -- Phantom Transfer -- with the property that, even if you know precisely how the poison was placed into an...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Chen Xiong, Zhiyuan He, Pin-Yu Chen +2 more
Activation steering is a practical post-training model alignment technique to enhance the utility of Large Language Models (LLMs). Prior to deploying...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Mengxuan Wang, Yuxin Chen, Gang Xu +3 more
Vision language models (VLMs) extend the reasoning capabilities of large language models (LLMs) to cross-modal settings, yet remain highly vulnerable...
3 months ago cs.AI cs.LG
PDF
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial