Attack HIGH
Ruichao Liang, Jing Chen, Xianglong Li +5 more
Smart contract vulnerabilities in Decentralized Finance cause billions of dollars in losses every year, yet the security community faces a...
1 week ago cs.CR cs.SE
Attack HIGH
Mario Rodríguez Béjar, Francisco J. Cortés-Delgado, S. Braghin +1 more
Large language models (LLMs) remain vulnerable to jailbreak attacks that bypass safety alignment and elicit harmful responses. A growing body of work...
1 week ago cs.CL cs.CR
Attack HIGH
Arne Roszeitis, Bartosz Burgiel, Victor Jüttner +1 more
Smart devices, such as light bulbs, TVs, fridges, etc., equipped with computing capabilities and wireless communication, are part of everyday life in...
Attack HIGH
Adel ElZemity, Budi Arief, Shujun Li +6 more
Bare-metal operational technology (OT) devices -- especially the microcontrollers running Modbus/TCP and CoAP at the base of industrial control...
1 week ago cs.CR cs.AI
Attack HIGH
Ji Guo, Xiaolong Qin, Cencen Liu +3 more
Vision-Language Models (VLMs) have achieved remarkable success in tasks such as image captioning and visual question answering (VQA). However, as...
Attack HIGH
Mingyu Luo, Zihan Zhang, Zesen Liu +7 more
Bring-Your-Own-Key (BYOK) agent architectures let users route LLM traffic through third-party relays, creating a critical integrity gap: a malicious...
1 week ago cs.CR cs.AI
Attack HIGH
Yanting Wang, Chenlong Yin, Ying Chen +1 more
Long-context large language models (LLMs)-for example, Gemini-3.1-Pro and Qwen-3.5-are widely used to empower many real-world applications, such as...
Attack HIGH
Prashant Kulkarni
Multi-turn prompt injection follows a known attack path -- trust-building, pivoting, escalation -- but text-level defenses miss covert attacks where...
1 week ago cs.CR cs.AI
Attack HIGH
Bowen Sun, Chaozhuo Li, Yaodong Yang +2 more
Decompositional jailbreaks pose a critical threat to large language models (LLMs) by allowing adversaries to fragment a malicious objective into a...
1 week ago cs.CR cs.CL cs.LG
Attack HIGH
Zi Li, Tian Zhou, Wenze Li +3 more
Local fine-tuning datasets routinely contain sensitive secrets such as API keys, personal identifiers, and financial records. Although "local...
1 week ago cs.CR cs.AI
Attack HIGH
Soheil Khodayari, Xuenan Zhang, Bhupendra Acharya +1 more
As LLMs are increasingly integrated into systems that browse, retrieve, summarize, and act on web content, webpages have become an untrusted input...
Attack HIGH
Benjamin Probst, Andreas Happe, Jürgen Cito
Recent research has demonstrated the potential of Large Language Models (LLMs) for autonomous penetration testing, particularly when using...
1 week ago cs.CR cs.AI
Attack HIGH
Shirin Alanova, Bogdan Minko, Sabrina Sadiekh +1 more
Safety mechanisms for large language models (LLMs) remain predominantly English-centric, creating systematic vulnerabilities in multilingual...
2 weeks ago cs.CL cs.AI
Attack HIGH
Mengyao Du, Han Fang, Haokai Ma +4 more
Web agents have emerged as an effective paradigm for automating interactions with complex web environments, yet remain vulnerable to prompt injection...
2 weeks ago cs.CR cs.AI
Attack HIGH
Miles Q. Li, Benjamin C. M. Fung, Boyang Li +2 more
Existing white-box jailbreak attacks against aligned LLMs typically append discrete adversarial suffixes to the user prompt, which visibly alters the...
Attack HIGH
Allen Jue
Learned index structures achieve high performance by modeling the cumulative distribution function (CDF) of keys, but this reliance on data...
2 weeks ago cs.CR cs.DB
Attack HIGH
Zonghao Ying, Haozheng Wang, Jiangfan Liu +5 more
Large Language Model (LLM) agents are increasingly used to automate complex workflows, but integrating untrusted external data with privileged...
Attack HIGH
Xinhe Wang, Katia Sycara, Yaqi Xie
Large (vision-)language models exhibit remarkable capability but remain highly susceptible to jailbreaking. Existing safety training approaches aim...
2 weeks ago cs.CR cs.AI cs.CL
Attack HIGH
Yu Cui, Ruiqing Yue, Hang Fu +6 more
With the wide adoption of personal AI assistants such as OpenClaw, privacy leakage in user interaction contexts with large language model (LLM)...
Attack HIGH
Naheed Rayhan, Sohely Jahan
Large language models (LLMs) are increasingly integrated into sensitive workflows, raising the stakes for adversarial robustness and safety. This...
2 weeks ago cs.CR cs.AI