Benchmark LOW
Chengwei Wei, Jung-jae Kim, Longyin Zhang +2 more
Large Language Models (LLMs) with extended reasoning capabilities often generate verbose and redundant reasoning traces, incurring unnecessary...
1 week ago cs.AI cs.CL
PDF
Survey MEDIUM
Yi Ting Shen, Kentaroh Toyoda, Alex Leung
The Model Context Protocol (MCP) introduces a structurally distinct attack surface that existing threat frameworks, designed for traditional software...
1 week ago cs.CR cs.AI
PDF
Survey MEDIUM
Abhijeet Sahu, Shuva Paul, Richard Macwan
Cyber deception raises the attacker's cost during reconnaissance and other early phases of an intrusion. In the past, numerous methods...
1 week ago cs.CR cs.ET
PDF
Attack HIGH
Hammad Atta, Ken Huang, Kyriakos Rock Lambros +11 more
Agentic LLM systems equipped with persistent memory, RAG pipelines, and external tool connectors face a class of attacks - Logic-layer Prompt Control...
Attack MEDIUM
Patrick Levi
Retrieval augmented generation systems have become an integral part of everyday life. Whether in internet search engines, email systems, or service...
1 week ago cs.CR cs.AI
PDF
Attack HIGH
Shenao Yan, Shimaa Ahmed, Shan Jin +4 more
Code generation large language models (LLMs) are increasingly integrated into modern software development workflows. Recent work has shown that these...
1 week ago cs.CR cs.AI cs.SE
PDF
Tool MEDIUM
Taiwo Onitiju, Iman Vakilinia
Large Language Models increasingly power critical infrastructure from healthcare to finance, yet they remain vulnerable to adversarial manipulation...
1 week ago cs.CR cs.AI
PDF
Attack MEDIUM
Kushankur Ghosh, Mehar Klair, Kian Kyars +2 more
Provenance graphs model causal system-level interactions from logs, enabling anomaly detectors to learn normal behavior and detect deviations as...
1 week ago cs.CR cs.LG
PDF
Benchmark LOW
Min Zeng, Shuang Zhou, Zaifu Zhan +1 more
Medical language models must be updated as evidence and terminology evolve, yet sequential updating can trigger catastrophic forgetting. Although...
Benchmark MEDIUM
Caglar Yildirim
Large language models (LLMs) are increasingly deployed as tool-using agents, shifting safety concerns from harmful text generation to harmful task...
Attack HIGH
Yong Zou, Haoran Li, Fanxiao Li +5 more
Recent progress in image generation models (IGMs) enables high-fidelity content creation but also amplifies risks, including the reproduction of...
1 week ago cs.CV cs.AI cs.CR
PDF
Attack HIGH
Guangsheng Zhang, Huan Tian, Leo Zhang +4 more
Semantic segmentation models are widely deployed in safety-critical applications such as autonomous driving, yet their vulnerability to backdoor...
Attack HIGH
Deng Liu, Song Chen
Hardware faults, specifically bit-flips in quantized weights, pose a severe reliability threat to Large Language Models (LLMs), often triggering...
Benchmark MEDIUM
Gengxin Sun, Ruihao Yu, Liangyi Yin +3 more
Ensuring robust and fair interview assessment remains a key challenge in AI-driven evaluation. This paper presents CoMAI, a general-purpose...
1 week ago cs.MA cs.AI
PDF
Attack HIGH
Xiaobing Sun, Perry Lam, Shaohua Li +4 more
Modern LLMs employ safety mechanisms that extend beyond surface-level input filtering to latent semantic representations and generation-time...
Tool MEDIUM
Zhouwei Zhai, Mengxiang Chen, Anmeng Zhang
Large language models offer transformative potential for e-commerce search by enabling intent-aware recommendations. However, their industrial...
Tool LOW
Cosimo Spera
Customer service automation is undergoing a structural transformation. The dominant paradigm is shifting from scripted chatbots and single-agent...
Attack MEDIUM
Amira Guesmi, Muhammad Shafique
Vision-language models (VLMs) have recently shown remarkable capabilities in visual understanding and generation, but remain vulnerable to...
1 week ago cs.CR cs.CV
PDF
Defense LOW
Roberto Morabito, Mallik Tatipamula
The Internet has evolved by progressively expanding what humanity connects: first computers, then people, and later billions of devices through the...
1 week ago cs.NI cs.AI
PDF
Defense MEDIUM
Ce Zhang, Jinxi He, Junyi He +2 more
Multi-modal Large Language Models (MLLMs) have achieved remarkable performance across a wide range of visual reasoning tasks, yet their vulnerability...
1 week ago cs.CV cs.CL cs.CR
PDF