Attack HIGH
Osama Zafar, Shaojie Zhan, Tianxi Ji +1 more
In recent years, the widespread adoption of Machine Learning as a Service (MLaaS), particularly in sensitive environments, has raised considerable...
Benchmark MEDIUM
Tailia Malloy, Tegawendé F. Bissyandé
Large Language Models are expanding beyond being a tool humans use and into independent agents that can observe an environment, reason about...
1 month ago cs.CR cs.AI
Benchmark MEDIUM
Nataša Krčo, Zexi Yao, Matthieu Meeus +1 more
Data containing personal information is increasingly used to train, fine-tune, or query Large Language Models (LLMs). Text is typically scrubbed of...
1 month ago cs.CL cs.AI cs.CR
Defense LOW
Jiyong Uhm, Minseok Kim, Michalis Polychronakis +1 more
Binary code analysis plays an essential role in cybersecurity, facilitating reverse engineering to reveal the inner workings of programs in the...
1 month ago cs.CR cs.LG
Attack MEDIUM
Oguzhan Baser, Elahe Sadeghi, Eric Wang +5 more
Most large language models (LLMs) run on external clouds: users send a prompt, pay for inference, and must trust that the remote GPU executes the LLM...
1 month ago cs.CR cs.AI
Benchmark LOW
Rosie Zhao, Anshul Shah, Xiaoyu Zhu +5 more
Reinforcement learning (RL) fine-tuning has become a key technique for enhancing large language models (LLMs) on reasoning-intensive tasks,...
Benchmark HIGH
André Storhaug, Jiamou Sun, Jingyue Li
Identifying vulnerability-fixing commits corresponding to disclosed CVEs is essential for secure software maintenance but remains challenging at...
1 month ago cs.SE cs.AI cs.CR
Survey LOW
Renjun Xu, Yang Yan
The transition from monolithic language models to modular, skill-equipped agents marks a defining shift in how large language models (LLMs) are...
1 month ago cs.MA cs.AI
Attack HIGH
Yannick Assogba, Jacopo Cortellazzi, Javier Abad +3 more
Jailbreak attacks remain a persistent threat to large language model safety. We propose Context-Conditioned Delta Steering (CC-Delta), an SAE-based...
1 month ago cs.CR cs.CL cs.LG
Other HIGH
Nate Rahn, Allison Qi, Avery Griffin +3 more
We want language model assistants to conform to a character specification, which asserts how the model should act across diverse user interactions....
Tool HIGH
Yuepeng Hu, Yuqi Jia, Mengyuan Li +2 more
In a malicious tool attack, an attacker uploads a malicious tool to a distribution platform; once a user installs the tool and the LLM agent selects...
Defense MEDIUM
Zhaoxin Wang, Jiaming Liang, Fengbin Zhu +5 more
Large language models (LLMs) and multimodal LLMs are typically safety-aligned before release to prevent harmful content generation. However, recent...
Defense MEDIUM
Yujun Zhou, Yue Huang, Han Bao +8 more
While most AI alignment research focuses on preventing models from generating explicitly harmful content, a more subtle risk is emerging:...
1 month ago cs.LG cs.CL
Survey MEDIUM
Varpu Vehomäki, Kimmo K. Kaski
Understanding cyber security is increasingly important for individuals and organizations. However, a lot of information related to cyber security can...
Defense MEDIUM
Christian Rondanini, Barbara Carminati, Elena Ferrari +2 more
The proliferation of edge devices has created an urgent need for security solutions capable of detecting malware in real time while operating under...
1 month ago cs.CR cs.AI cs.DC
Attack HIGH
Dong Yan, Jian Liang, Ran He +1 more
Recent studies have shown that large language models (LLMs) can infer private user attributes (e.g., age, location, gender) from user-generated text...
1 month ago cs.CR cs.AI cs.CL
Benchmark MEDIUM
Faouzi El Yagoubi, Ranwa Al Mallah, Godwin Badu-Marfo
Multi-agent Large Language Model (LLM) systems create privacy risks that current benchmarks cannot measure. When agents coordinate on tasks,...
Attack HIGH
Sri Durga Sai Sowmya Kadali, Evangelos E. Papalexakis
Jailbreaking large language models (LLMs) has emerged as a critical security challenge with the widespread deployment of conversational AI systems....
1 month ago cs.CR cs.CL
Defense MEDIUM
Md Sazedur Rahman, Mizanur Rahman Jewel, Sanjay Madria
Mining is rapidly evolving into an AI driven cyber physical ecosystem where safety and operational reliability depend on robust perception,...
1 month ago cs.CR cs.DC
Benchmark MEDIUM
Aashish Kolluri, Rishi Sharma, Manuel Costa +5 more
Indirect prompt injection attacks threaten AI agents that execute consequential actions, motivating deterministic system-level defenses. Such...
1 month ago cs.CR cs.LG