Attack HIGH
Zi Liang, Qingqing Ye, Xuan Liu +3 more
Synthetic data refers to artificial samples generated by models. While it has been shown to significantly enhance the performance of large...
7 months ago cs.CR cs.AI cs.CL
Attack HIGH
Javad Forough, Mohammad Maheri, Hamed Haddadi
Large Language Models (LLMs) are increasingly susceptible to jailbreak attacks, which are adversarial prompts that bypass alignment constraints and...
Attack MEDIUM
Jeongyeon Hwang, Sangdon Park, Jungseul Ok
Watermarking offers a promising solution for detecting LLM-generated content, yet its robustness under realistic query-free (black-box) evasion...
7 months ago cs.CR cs.AI
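The entry above concerns black-box evasion of LLM watermarks. As background, one common watermarking family biases generation toward a pseudorandom "green list" of tokens seeded by the preceding token; a detector then scores how often adjacent token pairs land in the green set. The sketch below is a toy, hypothetical detector in that style (the hash scheme and threshold are assumptions, not the paper's method):

```python
import hashlib

def green_fraction(tokens, greenlist_frac=0.5):
    """Toy 'green list' watermark score: hash each token with its
    predecessor; a watermarked generator would have preferentially
    chosen tokens whose hash falls in the green portion of the range.
    A fraction well above greenlist_frac suggests watermarked text."""
    green = 0
    for prev, tok in zip(tokens, tokens[1:]):
        h = int(hashlib.sha256(f"{prev}|{tok}".encode()).hexdigest(), 16)
        if (h % 100) < greenlist_frac * 100:
            green += 1
    return green / max(len(tokens) - 1, 1)
```

Query-free evasion attacks of the kind the abstract mentions would paraphrase or substitute tokens to drive this fraction back toward the unwatermarked baseline.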
Attack HIGH
Aashnan Rahman, Abid Hasan, Sherajul Arifin +5 more
Federated learning (FL) enables privacy-preserving model training by keeping data decentralized. However, it remains vulnerable to label-flipping...
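For context on the label-flipping threat named above: a malicious FL client corrupts its local labels before training, so the model update it submits pulls the aggregated global model toward wrong predictions. A minimal illustrative sketch (names and data are hypothetical; real attacks act through the locally trained update, not raw labels at the server):

```python
# Toy label-flipping: a malicious client inverts its local labels
# before local training, poisoning the update it sends to the server.
def flip_labels(examples, num_classes):
    """Map each label y to (num_classes - 1 - y)."""
    return [(x, num_classes - 1 - y) for x, y in examples]

honest_data = [([0.1, 0.9], 0), ([0.8, 0.2], 1)]
poisoned = flip_labels(honest_data, num_classes=2)
# poisoned == [([0.1, 0.9], 1), ([0.8, 0.2], 0)]
```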
Attack HIGH
Roie Kazoom, Yuval Ratzabi, Etamar Rothstein +1 more
Adversarial robustness in structured data remains an underexplored frontier compared to vision and language domains. In this work, we introduce a...
7 months ago cs.LG cs.AI
Attack HIGH
Hwan Chang, Yonghyun Jun, Hwanhee Lee
The growing deployment of large language model (LLM) based agents that interact with external environments has created new attack surfaces for...
Attack MEDIUM
Xingyu Li, Juefei Pu, Yifan Wu +13 more
Open-source software projects are foundational to modern software ecosystems, with the Linux kernel standing out as a critical exemplar due to its...
7 months ago cs.CR cs.LG
Benchmark MEDIUM
Antreas Ioannou, Andreas Shiamishis, Nora Hollenstein +1 more
In an era dominated by Large Language Models (LLMs), understanding their capabilities and limitations, especially in high-stakes fields like law, is...
7 months ago cs.CL cs.AI cs.LG
Benchmark LOW
Pooneh Mousavi, Lovenya Jain, Mirco Ravanelli +1 more
Large Audio Language Models (LALMs) integrate audio encoders with pretrained Large Language Models to perform complex multimodal reasoning tasks....
7 months ago cs.LG eess.AS
Attack HIGH
Wonjun Lee, Haon Park, Doehyeon Lee +2 more
Along with the rapid advancement of numerous Text-to-Video (T2V) models, growing concerns have emerged regarding their safety risks. While recent...
7 months ago cs.CV cs.AI
Other LOW
Stina Sundstedt, Mattias Wingren, Susanne Hägglund +1 more
Preschool children with language vulnerabilities -- such as developmental language disorders or immigration-related language challenges -- often...
7 months ago cs.RO cs.AI cs.HC
Benchmark MEDIUM
Nakyeong Yang, Dong-Kyum Kim, Jea Kwon +3 more
Large language models trained on web-scale data can memorize private or sensitive knowledge, raising significant privacy risks. Although some...
Benchmark MEDIUM
Haochen Gong, Chenxiao Li, Rui Chang +1 more
Large language model (LLM)-based computer-use agents represent a convergence of AI and OS capabilities, enabling natural language to control system-...
7 months ago cs.CR cs.AI cs.OS
Benchmark MEDIUM
Jiayu Ding, Xinpeng Liu, Zhiyi Pan +2 more
Lifting 2D open-vocabulary understanding into 3D Gaussian Splatting (3DGS) scenes is a critical challenge. However, mainstream methods suffer from...
7 months ago cs.CV cs.AI
Tool MEDIUM
Lukas Twist, Jie M. Zhang, Mark Harman +1 more
Large language models (LLMs) are increasingly used to generate code, yet they continue to hallucinate, often inventing non-existent libraries. Such...
7 months ago cs.SE cs.CL
Tool HIGH
Petar Radanliev
This study presents a structured approach to evaluating vulnerabilities within quantum cryptographic protocols, focusing on the BB84 quantum key...
7 months ago cs.CR cs.AI cs.NI
Attack MEDIUM
David Benfield, Stefano Coniglio, Phan Tu Vuong +1 more
Adversarial machine learning concerns situations in which learners face attacks from active adversaries. Such scenarios arise in applications such as...
7 months ago cs.LG cs.CR
Defense MEDIUM
Anton Korznikov, Andrey Galichin, Alexey Dontsov +3 more
Activation steering is a promising technique for controlling LLM behavior by adding semantically meaningful vectors directly into a model's hidden...
7 months ago cs.LG cs.AI
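The mechanism named in this entry, activation steering, adds a direction vector to a model's hidden activations at some layer during the forward pass to nudge behavior. A minimal sketch under assumed toy values (the vectors and the "refusal direction" label are hypothetical, not from the paper):

```python
# Toy activation steering: shift a layer's hidden activations along a
# precomputed steering direction, scaled by alpha. In practice this is
# done inside a real model's forward pass (e.g. via a layer hook).
def forward_with_steering(hidden, steering_vec, alpha=1.0):
    """Return hidden activations shifted along the steering direction."""
    return [h + alpha * s for h, s in zip(hidden, steering_vec)]

hidden = [0.2, -0.1, 0.5]            # activations at some layer (toy)
refusal_dir = [0.25, 0.0, -0.25]     # assumed 'refusal' direction
steered = forward_with_steering(hidden, refusal_dir, alpha=2.0)
```

Setting alpha negative would steer away from the direction instead of toward it, which is the dual use the abstract hints at.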
Attack HIGH
Aravindhan G, Yuvaraj Govindarajulu, Parin Shah
Recent studies have demonstrated the vulnerability of Automatic Speech Recognition systems to adversarial examples, which can deceive these systems...
7 months ago cs.SD cs.AI cs.CR
Attack HIGH
Yue Liu, Yanjie Zhao, Yunbo Lyu +3 more
Agentic AI coding editors driven by large language models have recently become more popular due to their ability to improve developer productivity...
7 months ago cs.CR cs.SE