Attack HIGH
Yizhe Zeng, Wei Zhang, Yunpeng Li +3 more
While Chain-of-Thought (CoT) prompting has become a standard paradigm for eliciting complex reasoning capabilities in Large Language Models, it...
Attack HIGH
Adrian Shuai Li, Md Ajwad Akil, Elisa Bertino
Concept drift and adversarial evasion are two major challenges for deploying machine learning-based malware detectors. While both have been studied...
Attack HIGH
Manish Bhatt, Sarthak Munshi, Vineeth Sai Narajala +4 more
We prove that no continuous, utility-preserving wrapper defense (a function $D: X \to X$ that preprocesses inputs before the model sees them) can make...
1 month ago cs.CR cs.AI
PDF
Attack MEDIUM
Mutsumi Sasaki, Kouta Nakayama, Yusuke Miyao +2 more
When introducing Large Language Models (LLMs) into industrial applications, such as healthcare and education, the risk of generating harmful content...
Attack LOW
Changgeon Ko, Jisu Shin, Hoyun Song +3 more
Large language model (LLM) agents are increasingly acting as human delegates in multi-agent environments, where a representative agent integrates...
1 month ago cs.CL cs.AI cs.MA
PDF
Attack MEDIUM
Xaver Fink, Borja Fernandez Adiego, Daniele Mirarchi +4 more
In this paper, we analyze and improve the adversarial robustness of a convolutional neural network (CNN) that assists crystal-collimator alignment at...
1 month ago cs.CR cs.LG
PDF
Attack HIGH
Zonghao Ying, Haowen Dai, Lianyu Hu +5 more
Modern text-to-image (T2I) models can now render legible, paragraph-length text, enabling a fundamentally new class of misuse. We identify and...
Attack HIGH
Yiyang Zhang, Chaojian Yu, Ziming Hong +4 more
Multimodal pretrained models are vulnerable to backdoor attacks, yet most existing methods rely on visual or multimodal triggers, which are...
1 month ago cs.CR cs.LG
PDF
Attack HIGH
Qingyang Xu, Yaling Shen, Stephanie Fong +7 more
The increasing use of large language models (LLMs) in mental healthcare raises safety concerns in high-stakes therapeutic interactions. A key...
Attack MEDIUM
Vinod Vaikuntanathan, Or Zamir
AI agents are increasingly deployed to interact with other agents on behalf of users and organizations. We ask whether two such agents, operated by...
1 month ago cs.CR cs.AI cs.LG
PDF
Attack MEDIUM
Qiqing Huang, Xingyu Wang, Wanda Guo +2 more
Modern 5G user equipment (UE) processes Radio Resource Control (RRC) configuration messages during early control-plane exchanges, before...
Attack MEDIUM
Aobo Chen, Chenxu Zhao, Chenglin Miao +1 more
Large language models (LLMs) possess strong semantic understanding, driving significant progress in data mining applications. This is further...
1 month ago cs.LG cs.CR
PDF
Attack HIGH
Siyuan Li, Zehao Liu, Xi Lin +6 more
As Large Language Models (LLMs) are increasingly deployed in complex applications, their vulnerability to adversarial attacks raises urgent safety...
1 month ago cs.CR cs.AI
PDF
Attack MEDIUM
Vickson Ferrel
As TLS 1.3 encryption limits traditional Deep Packet Inspection (DPI), the security community has pivoted to Euclidean Transformer-based classifiers...
1 month ago cs.CR cs.LG
PDF
Attack HIGH
Zikai Zhang, Rui Hu, Olivera Kotevska +1 more
Large Language Models (LLMs) are powerful tools for answering user queries, yet they remain highly vulnerable to jailbreak attacks. Existing...
1 month ago cs.CR cs.AI
PDF
Attack HIGH
Tiankai Yang, Jiate Li, Yi Nian +5 more
LLM-based agents increasingly operate across repeated sessions, maintaining task states to ensure continuity. In many deployments, a single agent...
1 month ago cs.CL cs.AI cs.CR
PDF
Attack HIGH
Yanting Wang, Wei Zou, Runpeng Geng +1 more
Large language models (LLMs) and their applications, such as agents, are highly vulnerable to prompt injection attacks. State-of-the-art prompt...
Attack HIGH
Jiaqing Li, Zhibo Zhang, Shide Zhou +3 more
Model merging has emerged as a powerful technique for combining specialized capabilities from multiple fine-tuned LLMs without additional training...
Attack MEDIUM
Quanyan Zhu, Zhengye Han
This paper introduces a performative scenario optimization framework for decision-dependent chance-constrained problems. Unlike classical stochastic...