Defense LOW
Junchuan Zhao, Minh Duc Vu, Ye Wang
Neural codec language models enable high-quality discrete speech synthesis, yet their inference remains vulnerable to token-level artifacts and...
2 months ago cs.SD eess.AS
Defense MEDIUM
Trapoom Ukarapol, Nut Chukamphaeng, Kunat Pipatanakul +1 more
The safety evaluation of large language models (LLMs) remains largely centered on English, leaving non-English languages and culturally grounded...
Defense MEDIUM
Zeyu Zhang, Xiangxiang Dai, Ziyi Han +2 more
Large language models (LLMs) are typically governed by post-training alignment (e.g., RLHF or DPO), which yields a largely static policy during...
2 months ago cs.LG cs.AI
Defense LOW
Brandon Yee, Krishna Sharma
MoltBook is a large-scale multi-agent coordination environment where over 770,000 autonomous LLM agents interact without human participation,...
2 months ago cs.MA cs.AI cs.SI
Defense LOW
Sami Abuzakuk, Lucas Crijns, Anne-Marie Kermarrec +2 more
Infrastructure as code (IaC) tools automate cloud provisioning but verifying that deployed systems remain consistent with the IaC specifications...
2 months ago cs.SE cs.AI cs.MA
Defense LOW
Nancy Lau, Louis Sloot, Jyoutir Raj +6 more
Large language models (LLMs) are increasingly being deployed as software engineering agents that autonomously contribute to repositories. A major...
2 months ago cs.CR cs.AI
Defense MEDIUM
Manisha Mukherjee, Vincent J. Hellendoorn
Large Language Models (LLMs) are increasingly deployed for code generation in high-stakes software development, yet their limited transparency in...
2 months ago cs.SE cs.AI cs.CR
Defense MEDIUM
Ming Wen, Kun Yang, Xin Chen +4 more
Multimodal Large Language Models (MLLMs) pose critical safety challenges, as they are susceptible not only to adversarial attacks such as...
2 months ago cs.LG cs.AI
Defense MEDIUM
Chang Xue, Fang Liu, Jiaye Wang +2 more
Decentralized financial platforms rely heavily on Web of Trust reputation systems to mitigate counterparty risk in the absence of centralized...
2 months ago cs.CR cs.AI cs.LG
Defense LOW
Xingyu Zhu, Kesen Zhao, Liang Yi +4 more
Multimodal large language models (MLLMs) have achieved remarkable progress in vision-language reasoning, yet they remain vulnerable to hallucination,...
Defense LOW
Kunpeng Zhang, Dongwei Xiao, Daoyuan Wu +5 more
Deep learning (DL) libraries are widely used in critical applications, where even subtle silent bugs can lead to serious consequences. While existing...
Defense MEDIUM
Lan Zhang, Chengsi Liang, Zeming Zhuang +4 more
Semantic communication (SemCom) redefines wireless communication from reproducing symbols to transmitting task-relevant semantics. However, this...
2 months ago cs.CR eess.SY
Defense MEDIUM
Xuan Chen, Hao Liu, Tao Yuan +3 more
Traditional phishing website detection relies on static heuristics or reference lists, which lag behind rapidly evolving attacks. While recent...
Defense MEDIUM
Mengxuan Hu, Vivek V. Datla, Anoop Kumar +4 more
Recent advances in alignment techniques such as Supervised Fine-Tuning (SFT), Reinforcement Learning from Human Feedback (RLHF), and Direct...
2 months ago cs.CL cs.AI
Defense MEDIUM
Morteza Eskandarian, Mahdi Rabbani, Arun Kaniyamattam +6 more
The current generation of large language models produces sophisticated social-engineering content that bypasses standard text screening systems in...
Defense MEDIUM
Chun Yan Ryan Kan, Tommy Tran, Vedant Yadav +4 more
Defending LLMs against adversarial jailbreak attacks remains an open challenge. Existing defenses rely on binary classifiers that fail when...
2 months ago cs.CR cs.AI cs.CL
Defense LOW
Imgyeong Lee, Tayyib Ul Hassan, Abram Hindle
Artificial Intelligence (AI) increasingly automates various software development tasks. Although AI has enhanced the productivity of...
Defense MEDIUM
Zachary Coalson, Beth Sohler, Aiden Gabriel +1 more
We identify a structural weakness in current large language model (LLM) alignment: modern refusal mechanisms are fail-open. While existing approaches...
2 months ago cs.LG cs.CR
Defense MEDIUM
Sasha Behrouzi, Lichao Wu, Mohamadreza Rostami +1 more
Safety alignment is essential for the responsible deployment of large language models (LLMs). Yet, existing approaches often rely on heavyweight...
2 months ago cs.CR cs.LG
Defense LOW
Robert Ranisch, Sabine Salloch
The emergence of agentic AI marks a new phase in the digital transformation of healthcare. Distinct from conventional generative AI, agentic AI...