Tool HIGH
Doron Shavit
Jailbreak prompts are a practical and evolving threat to large language models (LLMs), particularly in agentic systems that execute tools over...
1 month ago cs.CR cs.AI
PDF
Attack HIGH
Yiwen Lu
Federated Learning (FL) enables collaborative model training without exposing clients' private data, and has been widely adopted in privacy-sensitive...
1 month ago cs.CR cs.DC
PDF
Attack HIGH
Yu Yin, Shuai Wang, Bevan Koopman +1 more
Large Language Models (LLMs) have emerged as powerful re-rankers. Recent research, however, has shown that simple prompt injections embedded within a...
Survey HIGH
Scott Thornton
AI-assisted code review is widely used to detect vulnerabilities before production release. Prior work shows that adversarial prompt manipulation can...
1 month ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Xianglin Yang, Yufei He, Shuo Ji +2 more
Self-evolving LLM agents update their internal state across sessions, often by writing and reusing long-term memory. This design improves performance...
1 month ago cs.CR cs.AI
PDF
Attack HIGH
Mitchell Piehl, Zhaohan Xi, Zuobin Xiong +2 more
Large language models (LLMs) are increasingly augmented with long-term memory systems to overcome finite context windows and enable persistent...
Attack HIGH
Xander Davies, Giorgi Giglemiani, Edmund Lau +3 more
Frontier LLMs are safeguarded against attempts to extract harmful information via adversarial prompts known as "jailbreaks". Recently, defenders have...
Attack HIGH
Lukas Struppek, Adam Gleave, Kellin Pelrine
As the capabilities of large language models continue to advance, so does their potential for misuse. While closed-source models typically rely on...
1 month ago cs.CR cs.AI cs.CL
PDF
Attack HIGH
In Chong Choi, Jiacheng Zhang, Feng Liu +1 more
Multi-turn jailbreak attacks are effective against text-only large language models (LLMs) by gradually introducing malicious content across turns....
Attack HIGH
Xiaojun Jia, Jie Liao, Simeng Qin +5 more
Agent skills are becoming a core abstraction in coding agents, packaging long-form instructions and auxiliary scripts to extend tool-augmented...
1 month ago cs.CR cs.AI
PDF
Attack HIGH
Yuqi Jia, Ruiqi Wang, Xilong Wang +2 more
Prompt injection attacks insert malicious instructions into an LLM's input to steer it toward an attacker-chosen task instead of the intended one....
Attack HIGH
Ruomeng Ding, Yifei Pang, He Sun +3 more
Evaluation and alignment pipelines for large language models increasingly rely on LLM-based judges, whose behavior is guided by natural-language...
1 month ago cs.CR cs.AI cs.CL
PDF
Benchmark HIGH
Haoyu Li, Xijia Che, Yanhao Wang +2 more
Proof-of-Vulnerability (PoV) generation is a critical task in software security, serving as a cornerstone for vulnerability validation, false...
1 month ago cs.SE cs.CR
PDF
Attack HIGH
Weiming Song, Xuan Xie, Ruiping Yin
Large language models (LLMs) remain vulnerable to jailbreak prompts that elicit harmful or policy-violating outputs, while many existing defenses...
1 month ago cs.CR cs.AI
PDF
Attack HIGH
Alfous Tim, Kuniyilh Simi D
Internet of Things (IoT) systems increasingly depend on continual learning to adapt to non-stationary environments. These environments can...
1 month ago cs.LG cs.CR cs.NI
PDF
Attack HIGH
Osama Zafar, Shaojie Zhan, Tianxi Ji +1 more
In recent years, the widespread adoption of Machine Learning as a Service (MLaaS), particularly in sensitive environments, has raised considerable...
Benchmark HIGH
André Storhaug, Jiamou Sun, Jingyue Li
Identifying vulnerability-fixing commits corresponding to disclosed CVEs is essential for secure software maintenance but remains challenging at...
1 month ago cs.SE cs.AI cs.CR
PDF
Attack HIGH
Yannick Assogba, Jacopo Cortellazzi, Javier Abad +3 more
Jailbreak attacks remain a persistent threat to large language model safety. We propose Context-Conditioned Delta Steering (CC-Delta), an SAE-based...
1 month ago cs.CR cs.CL cs.LG
PDF
Other HIGH
Nate Rahn, Allison Qi, Avery Griffin +3 more
We want language model assistants to conform to a character specification, which asserts how the model should act across diverse user interactions....
Tool HIGH
Yuepeng Hu, Yuqi Jia, Mengyuan Li +2 more
In a malicious tool attack, an attacker uploads a malicious tool to a distribution platform; once a user installs the tool and the LLM agent selects...