Tool LOW
Cole Walsh, Rodica Ivan
Automated systems have been widely adopted across the educational testing industry for open-response assessment and essay scoring. These systems...
1 month ago cs.CL cs.AI cs.CY
PDF
Benchmark MEDIUM
Pei Chen, Geng Hong, Xinyi Wu +6 more
The emergence of Large Language Model-enhanced Search Engines (LLMSEs) has revolutionized information retrieval by integrating web-scale search...
1 month ago cs.CR cs.IR
PDF
Defense MEDIUM
Xunguang Wang, Yuguang Zhou, Qingyue Wang +5 more
Large language models (LLMs) increasingly rely on explicit chain-of-thought (CoT) reasoning to solve complex tasks, yet the safety of the reasoning...
1 month ago cs.AI cs.CR
PDF
Attack HIGH
Eyal Hadad, Mordechai Guri
On-device Vision-Language Models (VLMs) promise data privacy via local execution. However, we show that the architectural shift toward Dynamic...
1 month ago cs.CR cs.AI cs.LG
PDF
Attack HIGH
Younes Salmi, Hanna Bogucka
Deep learning (DL) has been widely studied for assisting applications of modern wireless communications. One of the applications is automatic...
Attack HIGH
Younes Salmi, Hanna Bogucka
Deep Learning (DL) has become a key technology that assists radio frequency (RF) signal classification applications, such as modulation...
Attack HIGH
Younes Salmi, Hanna Bogucka
This paper investigates the susceptibility to model integrity attacks that overload virtual machines assigned by the k-means algorithm used for...
1 month ago cs.CR cs.LG
PDF
Attack HIGH
Hieu Xuan Le, Benjamin Goh, Quy Anh Tang
Prompt attacks, including jailbreaks and prompt injections, pose a critical security risk to Large Language Model (LLM) systems. In production,...
Attack HIGH
Haozhen Wang, Haoyue Liu, Jionghao Zhu +3 more
Large Language Models (LLMs) have demonstrated remarkable performance across a wide range of applications. However, their practical deployment is...
1 month ago cs.CR cs.AI
PDF
Defense LOW
Cristian Lupascu, Alexandru Lupascu
Large Language Model-based agents increasingly operate in high-stakes, multi-turn settings where factual grounding is critical, yet their memory...
Benchmark LOW
Zhihui Yao, Hengran Zhang, Keping Bi
Retrieval-Augmented Generation (RAG) enhances Large Language Models (LLMs) with external knowledge but remains vulnerable to low-authority sources...
Tool HIGH
Ron Litvak
System prompt configuration can make the difference between near-total phishing blindness and near-perfect detection in LLM email agents. We present...
1 month ago cs.CR cs.AI
PDF
Survey MEDIUM
Zhenyi Wang, Siyu Luan
As machine learning (ML) systems expand in both scale and functionality, the security landscape has become increasingly complex, with a proliferation...
1 month ago cs.CR cs.AI cs.CL
PDF
Attack MEDIUM
Ahmed Lekssays
Large Language Models (LLMs) face critical challenges when analyzing security vulnerabilities in real-world codebases: token limits prevent loading...
Benchmark LOW
Francesco Gentile, Nicola Dall'Asen, Francesco Tonini +3 more
As vision-language models are deployed at scale, understanding their internal mechanisms becomes increasingly critical. Existing interpretability...
Defense MEDIUM
Yuxiao Li, Alina Fastowski, Efstratios Zaradoukas +2 more
Activation steering has emerged as a powerful tool to shape LLM behavior without the need for weight updates. While its inherent brittleness and...
1 month ago cs.CR cs.CL
PDF
Attack HIGH
Alexander Panfilov, Peter Romov, Igor Shilov +3 more
LLM agents like Claude Code can not only write code but also be used for autonomous AI research and engineering...
1 month ago cs.LG cs.AI cs.CR
PDF
Attack HIGH
Joseph G. Zalameda, Megan A. Witherow, Alexander M. Glandon +2 more
Machine learning models trained on small data sets for security applications are especially vulnerable to adversarial attacks. Person identification...
1 month ago cs.LG cs.CR cs.CV
PDF
Benchmark MEDIUM
Michael Somma, Markus Großpointner, Paul Zabalegui +2 more
The increasing complexity and interconnectivity of digital infrastructures make scalable and reliable security assessment methods essential. Robotic...
1 month ago cs.RO cs.AI
PDF
Attack HIGH
Yulin Shen, Xudong Pan, Geng Hong +1 more
Recent advances in the Model Context Protocol (MCP) have enabled large language models (LLMs) to invoke external tools with unprecedented ease. This...
1 month ago cs.CR cs.AI
PDF