Attack HIGH
Hoang Long Do, Nasrin Sohrabi, Muneeb Ul Hassan
Large language models (LLMs) have been widely adopted in modern software development lifecycles, where they are increasingly used to automate and...
Attack HIGH
Shutong Fan, Lan Zhang, Xiaoyong Yuan
Most adversarial threats in artificial intelligence target the computational behavior of models rather than the humans who rely on them. Yet modern...
Benchmark LOW
Bibhabasu Mandal, Sagnik Nandy
In sensitive applications involving relational datasets, protecting information about individual links from adversarial queries is of paramount...
3 months ago stat.ML cs.CR cs.LG
PDF
Attack HIGH
Xilong Wang, Yinuo Liu, Zhun Wang +2 more
Prompt injection attacks manipulate webpage content to cause web agents to execute attacker-specified tasks instead of the user's intended ones....
3 months ago cs.CR cs.AI cs.CL
PDF
Attack MEDIUM
Andrew Draganov, Tolga H. Dur, Anandmayi Bhongade +1 more
We present a data poisoning attack -- Phantom Transfer -- with the property that, even if you know precisely how the poison was placed into an...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Chen Xiong, Zhiyuan He, Pin-Yu Chen +2 more
Activation steering is a practical post-training model alignment technique to enhance the utility of Large Language Models (LLMs). Prior to deploying...
3 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Omar Abdelnasser, Fatemah Alharbi, Khaled Khasawneh +2 more
Safety alignment in Language Models (LMs) is fundamental for trustworthy AI. However, while different stakeholders are trying to leverage Arabic...
3 months ago cs.CL cs.AI
PDF
Attack HIGH
Mengxuan Wang, Yuxin Chen, Gang Xu +3 more
Vision language models (VLMs) extend the reasoning capabilities of large language models (LLMs) to cross-modal settings, yet remain highly vulnerable...
3 months ago cs.AI cs.LG
PDF
Tool LOW
Jiaqi Gao, Zijian Zhang, Yuqiang Sun +5 more
Business logic vulnerabilities have become one of the most damaging yet least understood classes of smart contract vulnerabilities. Unlike...
Attack HIGH
Hicham Eddoubi, Umar Faruk Abdullahi, Fadi Hassan
Large Language Models (LLMs) have seen widespread adoption across multiple domains, creating an urgent need for robust safety alignment mechanisms....
Attack MEDIUM
Matthew P. Lad, Louisa Conwill, Megan Levis Scheirer
With the rapid growth of Large Language Models (LLMs), criticism of their societal impact has also grown. Work in Responsible AI (RAI) has focused on...
Benchmark HIGH
Hao Li, Ruoyao Wen, Shanghao Shi +2 more
AI agents that autonomously interact with external tools and environments show great promise across real-world applications. However, the external...
Attack LOW
Blake Bullwinkel, Giorgio Severi, Keegan Hines +3 more
Detecting whether a model has been poisoned is a longstanding problem in AI security. In this work, we present a practical scanner for identifying...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Xiaozuo Shen, Yifei Cai, Rui Ning +2 more
The widespread adoption of Vision Transformers (ViTs) elevates supply-chain risk on third-party model hubs, where an adversary can implant backdoors...
Defense MEDIUM
Sidahmed Benabderrahmane, Petko Valtchev, James Cheney +1 more
Detecting rare and diverse anomalies in highly imbalanced datasets-such as Advanced Persistent Threats (APTs) in cybersecurity-remains a fundamental...
3 months ago cs.LG cs.AI cs.CR
PDF
Benchmark MEDIUM
Tomer Kordonsky, Maayan Yamin, Noam Benzimra +2 more
LLMs are increasingly used for code generation, but their outputs often follow recurring templates that can induce predictable vulnerabilities. We...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Nirab Hossain, Pablo Moriano
Modern vehicles rely on electronic control units (ECUs) interconnected through the Controller Area Network (CAN), making in-vehicle communication a...
3 months ago cs.CR cs.AI cs.LG
PDF
Defense MEDIUM
Rohan Saxena
Fine-tuning language models on narrowly harmful data causes emergent misalignment (EM) -- behavioral failures extending far beyond training...
3 months ago cs.CL cs.AI
PDF
Attack MEDIUM
Patrick Cooper, Alireza Nadali, Ashutosh Trivedi +1 more
Large language models (LLMs) are known to exhibit brittle behavior under adversarial prompts and jailbreak attacks, even after extensive alignment...
3 months ago cs.CL cs.AI cs.CR
PDF
Benchmark MEDIUM
Najmul Hasan, Prashanth BusiReddyGari
The Uniform Resource Locator (URL), introduced in a connectivity-first era to define access and locate resources, remains historically limited,...
3 months ago cs.CR cs.AI
PDF
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial