Attack MEDIUM
Pavel Chizhov, Egor Bogomolov, Ivan P. Yamshchikov
Efficiency and safety of Large Language Models (LLMs), among other factors, rely on the quality of tokenization. A good tokenizer not only improves...
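The blurb above turns on tokenizer quality. As an illustrative aside (not from the paper itself), the byte-pair encoding (BPE) scheme behind most LLM tokenizers builds its vocabulary by repeatedly fusing the most frequent adjacent token pair; a minimal sketch of one merge step, using the classic `aaabdaaabac` example:

```python
from collections import Counter

def bpe_merge_step(tokens):
    """One merge step of byte-pair encoding: find the most frequent
    adjacent pair and fuse each occurrence into a single token."""
    pairs = Counter(zip(tokens, tokens[1:]))
    if not pairs:
        return tokens
    best = max(pairs, key=pairs.get)
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == best:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

tokens = bpe_merge_step(list("aaabdaaabac"))
# most frequent pair ('a', 'a') is merged:
# ['aa', 'a', 'b', 'd', 'aa', 'a', 'b', 'a', 'c']
```

Poorly chosen merges can leave rare strings split into many low-frequency tokens, which is one way tokenization quality feeds into both efficiency and robustness.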
Benchmark MEDIUM
Djiré Albérick Euraste, Kaboré Abdoul Kader, Jordan Samhi +3 more
The lack of transparency about code datasets used to train large language models (LLMs) makes it difficult to detect, evaluate, and mitigate data...
Survey MEDIUM
Yi Ting Shen, Kentaroh Toyoda, Alex Leung
The rapid proliferation of Model Context Protocol (MCP)-based agentic systems has introduced a new category of security threats that existing...
3 weeks ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Xixun Lin, Yang Liu, Yancheng Chen +9 more
The performance of large language model (LLM) agents depends critically on the execution harness, the system layer that orchestrates tool use,...
3 weeks ago cs.CR cs.AI
PDF
Defense MEDIUM
Xiaohua Wang, Muzhao Tian, Yuqi Zeng +20 more
Reinforcement Learning from Human Feedback (RLHF) and related alignment paradigms have become central to steering large language models (LLMs) and...
Defense MEDIUM
Sujan Ghimire, Parsa Mirfasihi, Muhtasim Alam Chowdhury +6 more
The globalization of integrated circuit (IC) design and manufacturing has increased the exposure of hardware intellectual property (IP) to untrusted...
Benchmark MEDIUM
Prajas Wadekar, Venkata Sai Pranav Bachina, Kunal Bhosikar +2 more
3D Gaussian Splatting (3DGS) has recently enabled highly photorealistic 3D reconstruction from casually captured multi-view images. However, this...
4 weeks ago cs.CV cs.CR cs.LG
PDF
Benchmark MEDIUM
Joel Fokou
Autonomous AI agents are rapidly transitioning from experimental tools to operational infrastructure, with projections that 80% of enterprise...
4 weeks ago cs.CR cs.AI
PDF
Attack MEDIUM
Shaopeng Fu, Di Wang
Adversarial training (AT) is an effective defense for large language models (LLMs) against jailbreak attacks, but performing AT on LLMs is costly. To...
4 weeks ago cs.LG cs.CR stat.ML
PDF
Attack MEDIUM
Anasuya Chattopadhyay, Daniel Reti, Hans D. Schotten
Cloud networks increasingly rely on machine learning-based Network Intrusion Detection Systems to defend against evolving cyber threats. However,...
4 weeks ago cs.LG cs.CR
PDF
Attack MEDIUM
Vladimir A. Mazin, Mikhail A. Zorin, Dmitrii S. Korzh +3 more
Passwords remain a dominant authentication method, yet their security is routinely subverted by predictable user choices and large-scale...
4 weeks ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Miit Daga, Swarna Priya Ramu
Organisations increasingly outsource privacy-sensitive data transformations to cloud providers, yet no practical mechanism lets the data owner verify...
4 weeks ago cs.CR cs.DB cs.LG
PDF
Benchmark MEDIUM
Rui Yin, Tianxu Han, Naen Xu +8 more
Safety-aligned large language models (LLMs) are increasingly deployed in real-world pipelines, yet this deployment also enlarges the supply-chain...
4 weeks ago cs.CR cs.CL
PDF
Benchmark MEDIUM
Pei-Yu Tseng, Lan Zhang, ZihDwo Yeh +3 more
Cyber Threat Intelligence (CTI) reports contain Indicators of Compromise (IOCs) that are critical for security operations. To operationalize these...
Tool MEDIUM
Shangkun Che, Silin Du, Ge Gao
The widespread use of Large Language Models (LLMs) in text generation has raised increasing concerns about intellectual property disputes....
4 weeks ago cs.CR cs.CL
PDF
Attack MEDIUM
Hongru Song, Yu-An Liu, Ruqing Zhang +4 more
Retrieval-augmented generation (RAG) enhances large language model (LLM) reasoning by retrieving external documents, but also opens up new attack...
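The retrieve-then-generate loop this blurb describes can be sketched in a few lines. This is a generic illustration, not the paper's method: a toy word-overlap retriever stands in for the dense retrievers used in real RAG pipelines, and the retrieved document is spliced into the prompt, which is exactly the surface a poisoned document would exploit.

```python
def retrieve(query, docs, k=1):
    """Score each document by word overlap with the query and return
    the top-k. A toy stand-in for a dense retriever."""
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

docs = [
    "The Eiffel Tower is in Paris.",
    "Photosynthesis converts light into chemical energy.",
]
question = "Where is the Eiffel Tower?"
context = retrieve(question, docs)[0]
# The retrieved text is inserted verbatim into the LLM prompt:
prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
```

Because retrieval is driven by similarity to the query, an attacker who can add documents to the corpus can craft text that ranks highly for targeted queries and is then trusted by the model as context.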
Attack MEDIUM
Anes Abdennebi, Nadjia Kara, Laaziz Lahlou
The applications of Generative Artificial Intelligence (GenAI) and their intersections with data-driven fields, such as healthcare, finance,...
4 weeks ago cs.CR cs.AI
PDF
Defense MEDIUM
Willy Carlos Tchuitcheu, Tan Lu, Ann Dooms
Historical approaches to Table Representation Learning (TRL) have largely adopted the sequential paradigms of Natural Language Processing (NLP). We...
Defense MEDIUM
Adam Stein, Davis Brown, Hamed Hassani +2 more
To identify safety violations, auditors often search over large sets of agent traces. This search is difficult because failures are often rare,...
4 weeks ago cs.AI cs.CL
PDF
Benchmark MEDIUM
Ricardo Bessa, Rui Claro, João Trindade +1 more
Large Language Models (LLMs) are redefining offensive cybersecurity by enabling the generation of harmful machine code with minimal human...