Attack HIGH
Ziye Wang, Guanyu Wang, Kailong Wang
Retrieval-Augmented Generation (RAG) significantly enhances Large Language Models (LLMs), but simultaneously exposes a critical vulnerability to...
Defense MEDIUM
Nikolaos D. Tantaroudas, Ilias Karachalios, Andrew J. McCracken
The field of cybersecurity is confronted with two interrelated challenges: a worldwide deficit of qualified practitioners and ongoing human-factor...
1 month ago cs.CE cs.AI cs.CR
PDF
Attack HIGH
Yizhe Zeng, Wei Zhang, Yunpeng Li +3 more
While Chain-of-Thought (CoT) prompting has become a standard paradigm for eliciting complex reasoning capabilities in Large Language Models, it...
Defense LOW
Shunan Zhu, Jiawei Chen, Yonghao Yu +1 more
As high-quality public data becomes scarce, Federated Learning (FL) provides a vital pathway to leverage valuable private user data while preserving...
1 month ago cs.CR cs.LG
PDF
Defense HIGH
Zi Liang, Qipeng Xie, Jun He +7 more
Recent advancements in Large Language Models (LLMs) have sparked interest in their application to Static Application Security Testing (SAST),...
1 month ago cs.CR cs.CL cs.SE
PDF
Benchmark HIGH
Phan The Duy, Nguyen Viet Duy, Khoa Ngo-Khanh +2 more
While recent approaches leverage large language models (LLMs) and multi-agent pipelines to automatically generate proof-of-concept (PoC) exploits...
Attack HIGH
Adrian Shuai Li, Md Ajwad Akil, Elisa Bertino
Concept drift and adversarial evasion are two major challenges for deploying machine learning-based malware detectors. While both have been studied...
Tool MEDIUM
Yinghan Hou, Zongyou Yang
OpenClaw's ClawHub marketplace hosts over 13,000 community-contributed agent skills, and between 13% and 26% of them contain security vulnerabilities...
1 month ago cs.CR cs.AI
PDF
Defense MEDIUM
Peigui Qi, Kunsheng Tang, Yanpu Yu +7 more
Vision-Language Models (VLMs) face significant safety vulnerabilities from malicious prompt attacks due to weakened alignment during visual...
Attack HIGH
Manish Bhatt, Sarthak Munshi, Vineeth Sai Narajala +4 more
We prove that no continuous, utility-preserving wrapper defense (a function $D: X\to X$ that preprocesses inputs before the model sees them) can make...
1 month ago cs.CR cs.AI
PDF
Other LOW
Peng Huang, Yiming Wang, Yineng Chen +9 more
Echocardiography plays an important role in the screening and diagnosis of cardiovascular diseases. However, automated intelligent analysis of...
Attack MEDIUM
Mutsumi Sasaki, Kouta Nakayama, Yusuke Miyao +2 more
When introducing Large Language Models (LLMs) into industrial applications, such as healthcare and education, the risk of generating harmful content...
Attack LOW
Changgeon Ko, Jisu Shin, Hoyun Song +3 more
Large language model (LLM) agents are increasingly acting as human delegates in multi-agent environments, where a representative agent integrates...
1 month ago cs.CL cs.AI cs.MA
PDF
Survey MEDIUM
Nirajan Acharya, Gaurav Kumar Gupta
The Model Context Protocol (MCP), introduced by Anthropic in November 2024 and now governed by the Linux Foundation's Agentic AI Foundation, has...
1 month ago cs.CR cs.AI
PDF
Attack MEDIUM
Xaver Fink, Borja Fernandez Adiego, Daniele Mirarchi +4 more
In this paper, we analyze and improve the adversarial robustness of a convolutional neural network (CNN) that assists crystal-collimator alignment at...
1 month ago cs.CR cs.LG
PDF
Attack HIGH
Zonghao Ying, Haowen Dai, Lianyu Hu +5 more
Modern text-to-image (T2I) models can now render legible, paragraph-length text, enabling a fundamentally new class of misuse. We identify and...
Defense MEDIUM
Igor Maljkovic, Maria Rosaria Briglia, Iacopo Masi +2 more
Vision-Language Models (VLMs) have become essential for tasks such as image synthesis, captioning, and retrieval by aligning textual and visual...
1 month ago cs.CR cs.AI cs.CV
PDF
Attack HIGH
Yiyang Zhang, Chaojian Yu, Ziming Hong +4 more
Multimodal pretrained models are vulnerable to backdoor attacks, yet most existing methods rely on visual or multimodal triggers, which are...
1 month ago cs.CR cs.LG
PDF
Survey MEDIUM
Jiaren Peng, Zeqin Li, Chang You +17 more
The rapid advancement of Large Language Models (LLMs) has created new opportunities for Automated Penetration Testing (AutoPT), spawning numerous...
1 month ago cs.CR cs.AI cs.SE
PDF