Defense LOW
Vahideh Zolfaghari
Large language models (LLMs) are increasingly consulted by parents for pediatric guidance, yet their safety under real-world adversarial pressures is...
Benchmark MEDIUM
Adam Kaufman, James Lucassen, Tyler Tracy +2 more
Future AI agents might run autonomously with elevated privileges. If these agents are misaligned, they might abuse these privileges to cause serious...
3 months ago cs.CR cs.AI
Attack HIGH
Joao Queiroz
Recent evidence shows that the versification of prompts constitutes a highly effective adversarial mechanism against aligned LLMs. The study...
3 months ago cs.CL cs.AI
Attack MEDIUM
Seok-Hyun Ga, Chun-Yen Chang
The rapid development of Generative AI is bringing innovative changes to education and assessment. As the prevalence of students utilizing AI for...
3 months ago cs.AI cs.CL cs.CY
Survey LOW
Siva Sai, Ishika Goyal, Shubham Sharma +3 more
The increasing number of cyber threats and rapidly evolving tactics, as well as the high volume of data in recent years, have caused classical...
3 months ago cs.LG cs.AI cs.CR
Benchmark MEDIUM
Xuanjun Zong, Zhiqi Shen, Lei Wang +2 more
Large language models (LLMs) are evolving into agentic systems that reason, plan, and operate external tools. The Model Context Protocol (MCP) is a...
3 months ago cs.CL cs.AI
Attack HIGH
Pablo Montaña-Fernández, Ines Ortega-Fernandez
Federated Learning is a machine learning setting that reduces direct data exposure, improving the privacy guarantees of machine learning models. Yet,...
3 months ago cs.LG cs.CR
Tool MEDIUM
Richard Helder Moulton, Austin O'Brien, John D. Hastings
Although large language models (LLMs) are increasingly used in security-critical workflows, practitioners lack quantitative guidance on which...
3 months ago cs.CR cs.AI cs.CL
Survey MEDIUM
Xinyu Huang, Shyam Karthick V B, Taozhao Chen +5 more
The integration of Large Language Models (LLMs) into robotics has revolutionized their ability to interpret complex human commands and execute...
Benchmark LOW
Edward Y. Chang
Large Language Models exhibit sycophancy: prioritizing agreeableness over correctness. Current remedies evaluate reasoning outcomes: RLHF rewards...
3 months ago cs.CL cs.AI
Defense MEDIUM
Nnamdi Philip Okonkwo, Lubna Luxmi Dhirani
A Cloud Security Operations Center (SOC) enables cloud governance, risk, and compliance by providing insight, visibility, and control. A Cloud SOC triages...
3 months ago cs.CR cs.LG
Benchmark LOW
Sahibpreet Singh, Shikha Dhiman
The integration of generative Artificial Intelligence into the digital ecosystem necessitates a critical re-evaluation of Indian criminal...
3 months ago cs.CR cs.AI cs.CY
Tool MEDIUM
Viet K. Nguyen, Mohammad I. Husain
Agentic AI introduces security vulnerabilities that traditional LLM safeguards fail to address. Although recent work by Unit 42 at Palo Alto Networks...
3 months ago cs.CR cs.AI
Tool MEDIUM
Arth Bhardwaj, Sia Godika, Yuvam Loonker
Traditional, centralized security tools often miss adaptive, multi-vector attacks. We present the Multi-Agent LLM Cyber Defense Framework (MALCDF), a...
3 months ago cs.CR cs.AI
Benchmark MEDIUM
Yihan Liao, Jacky Keung, Xiaoxue Ma +2 more
The rapid advancement of Large Language Models (LLMs) has been driven by extensive datasets that may contain sensitive information, raising serious...
Defense MEDIUM
Teodor Poncu, Ioana Pintilie, Marius Dragoi +2 more
Large Language Models (LLMs) typically excel at coding tasks involving high-level programming languages, as opposed to lower-level programming...
3 months ago cs.CL cs.LG
Attack MEDIUM
Piercosma Bisconti, Marcello Galisai, Matteo Prandi +6 more
Safety mechanisms in LLMs remain vulnerable to attacks that reframe harmful requests through culturally coded structures. We introduce Adversarial...
3 months ago cs.CL cs.AI cs.CY
Attack HIGH
Xingfu Zhou, Pengfei Wang
Large Language Model (LLM) agents relying on external retrieval are increasingly deployed in high-stakes environments. While existing adversarial...
3 months ago cs.CR cs.AI
Benchmark MEDIUM
Ruozhao Yang, Mingfei Cheng, Gelei Deng +3 more
Penetration testing is essential for assessing and strengthening system security against real-world threats, yet traditional workflows remain highly...
3 months ago cs.SE cs.AI cs.CR
Attack HIGH
Yunhao Yao, Zhiqiang Wang, Haoran Cheng +3 more
The evolution of Large Language Models (LLMs) into Agentic AI has established the Model Context Protocol (MCP) as the standard for connecting...
3 months ago cs.CR cs.AI