AI Security Research
2,077+ academic papers on AI security, attacks, and defenses
Defense MEDIUM
Carlos Hinojosa, Clemens Grange, Bernard Ghanem
Vision-language models (VLMs) are increasingly deployed in real-world and embodied settings where safety decisions depend on visual context. However,...
5 days ago cs.CV cs.AI cs.CL
PDF
Attack LOW
Pranay Anchuri, Matteo Campanelli, Paul Cesaretti +4 more
When large AI models are deployed as cloud-based services, clients have no guarantee that responses are correct or were produced by the intended...
5 days ago cs.CR cs.LG
PDF
Benchmark MEDIUM
Zikang Ding, Junhao Li, Suling Wu +3 more
Model watermarking utilizes internal representations to protect the ownership of large language models (LLMs). However, these features inevitably...
6 days ago cs.CR cs.AI
PDF
Survey HIGH
Dimitris Mitropoulos, Nikolaos Alexopoulos, Georgios Alexopoulos +1 more
Security code reviews increasingly rely on systems integrating Large Language Models (LLMs), ranging from interactive assistants to autonomous agents...
6 days ago cs.SE cs.AI cs.CR
PDF
Attack HIGH
Mohammadhossein Homaei, Iman Khazrak, Rubén Molano +2 more
Industrial Cyber-Physical Systems (ICPS) face growing threats from cyber-attacks that exploit sensor and control vulnerabilities. Digital Twin (DT)...
6 days ago cs.CR cs.LG
PDF
Attack HIGH
Jiahao Zhang, Yilong Wang, Suhang Wang
Graph neural networks (GNNs) are widely used for learning from graph-structured data in domains such as social networks, recommender systems, and...
6 days ago cs.LG cs.CR
PDF
Tool HIGH
Md Takrim Ul Alam, Akif Islam, Mohd Ruhul Ameen +2 more
Large language models (LLMs) deployed behind APIs and retrieval-augmented generation (RAG) stacks are vulnerable to prompt injection attacks that may...
Benchmark LOW
Alvin Rajkomar, Pavan Sudarshan, Angela Lai +1 more
Background: Clinical trials rely on transparent inclusion criteria to ensure generalizability. In contrast, benchmarks validating health-related...
Survey MEDIUM
Saket Sanjeev Chaturvedi, Joshua Bergerson, Tanwi Mallick
As large language models (LLMs) evolve into autonomous "AI scientists," they promise transformative advances but introduce novel vulnerabilities,...
6 days ago cs.CR cs.CV
PDF
Attack MEDIUM
Xavier Cadet, Aditya Vikram Singh, Harsh Mamania +6 more
Investigating cybersecurity incidents requires collecting and analyzing evidence from multiple log sources, including intrusion detection alerts,...
6 days ago cs.CR cs.AI
PDF