Attack HIGH
Yuqi Jia, Ruiqi Wang, Xilong Wang +2 more
Prompt injection attacks insert malicious instructions into an LLM's input to steer it toward an attacker-chosen task instead of the intended one....
Attack HIGH
Ruomeng Ding, Yifei Pang, He Sun +3 more
Evaluation and alignment pipelines for large language models increasingly rely on LLM-based judges, whose behavior is guided by natural-language...
3 months ago cs.CR cs.AI cs.CL
PDF
Benchmark HIGH
Haoyu Li, Xijia Che, Yanhao Wang +2 more
Proof-of-Vulnerability (PoV) generation is a critical task in software security, serving as a cornerstone for vulnerability validation, false...
3 months ago cs.SE cs.CR
PDF
Attack HIGH
Weiming Song, Xuan Xie, Ruiping Yin
Large language models (LLMs) remain vulnerable to jailbreak prompts that elicit harmful or policy-violating outputs, while many existing defenses...
3 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Mohamed Shaaban, Mohamed Elmahallawy
Federated learning (FL) enables collaborative training across organizational silos without sharing raw data, making it attractive for...
3 months ago cs.CR cs.CL
PDF
Other LOW
Peng Cheng, Jiucheng Zang, Qingnan Li +6 more
Muon-style optimizers leverage Newton-Schulz (NS) iterations to orthogonalize updates, yielding update geometries that often outperform Adam-series...
3 months ago cs.LG cs.AI
PDF
Attack MEDIUM
Akshat Naik, Jay Culligan, Yarin Gal +4 more
As Large Language Model (LLM) agents become more capable, their coordinated use in the form of multi-agent systems is anticipated to emerge as a...
Benchmark MEDIUM
Anudeep Das, Prach Chantasantitam, Gurjot Singh +3 more
Large language models (LLMs) are increasingly deployed in settings where inducing a bias toward a certain topic can have significant consequences,...
3 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Xu Li, Simon Yu, Minzhou Pan +5 more
LLM-based agents are becoming increasingly capable, yet their safety lags behind. This creates a gap between what agents can do and should do. This...
3 months ago cs.CR cs.AI cs.CL
PDF
Attack MEDIUM
Yiran Gao, Kim Hammar, Tao Li
Rapidly evolving cyberattacks demand incident response systems that can autonomously learn and adapt to changing threats. Prior work has extensively...
3 months ago cs.CR cs.AI
PDF
Attack HIGH
Alfous Tim, Kuniyilh Simi D
The Internet of Things (IoT) systems increasingly depend on continual learning to adapt to non-stationary environments. These environments can...
3 months ago cs.LG cs.CR cs.NI
PDF
Defense MEDIUM
George Alexandru Adam, Alexander Cui, Edwin Thomas +7 more
While historical considerations surrounding text authenticity revolved primarily around plagiarism, the advent of large language models (LLMs) has...
Attack HIGH
Osama Zafar, Shaojie Zhan, Tianxi Ji +1 more
In recent years, the widespread adoption of Machine Learning as a Service (MLaaS), particularly in sensitive environments, has raised considerable...
Benchmark MEDIUM
Tailia Malloy, Tegawende F. Bissyande
Large Language Models are expanding beyond being a tool humans use and into independent agents that can observe an environment, reason about...
3 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Nataša Krčo, Zexi Yao, Matthieu Meeus +1 more
Data containing personal information is increasingly used to train, fine-tune, or query Large Language Models (LLMs). Text is typically scrubbed of...
3 months ago cs.CL cs.AI cs.CR
PDF
Defense LOW
Jiyong Uhm, Minseok Kim, Michalis Polychronakis +1 more
Binary code analysis plays an essential role in cybersecurity, facilitating reverse engineering to reveal the inner workings of programs in the...
3 months ago cs.CR cs.LG
PDF
Attack MEDIUM
Oguzhan Baser, Elahe Sadeghi, Eric Wang +5 more
Most large language models (LLMs) run on external clouds: users send a prompt, pay for inference, and must trust that the remote GPU executes the LLM...
3 months ago cs.CR cs.AI
PDF
Benchmark LOW
Rosie Zhao, Anshul Shah, Xiaoyu Zhu +5 more
Reinforcement learning (RL) fine-tuning has become a key technique for enhancing large language models (LLMs) on reasoning-intensive tasks,...
Benchmark HIGH
André Storhaug, Jiamou Sun, Jingyue Li
Identifying vulnerability-fixing commits corresponding to disclosed CVEs is essential for secure software maintenance but remains challenging at...
3 months ago cs.SE cs.AI cs.CR
PDF
Survey LOW
Renjun Xu, Yang Yan
The transition from monolithic language models to modular, skill-equipped agents marks a defining shift in how large language models (LLMs) are...
3 months ago cs.MA cs.AI
PDF
Track AI security vulnerabilities in real time
Get breaking CVE alerts, compliance reports (ISO 42001, EU AI Act),
and CISO risk assessments for your AI/ML stack.
Start 14-Day Free Trial