Tool MEDIUM
Shaona Ghosh, Barnaby Simkin, Kyriacos Shiarlis +9 more
This paper introduces a dynamic and actionable framework for securing agentic AI systems in enterprise deployment. We contend that safety and...
5 months ago cs.LG cs.AI cs.CR
Benchmark MEDIUM
Gauri Pradhan, Joonas Jälkö, Santiago Zanella-Béguelin +1 more
Training machine learning models with differential privacy (DP) limits an adversary's ability to infer sensitive information about the training data....
5 months ago cs.CR cs.LG
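The abstract above refers to training with differential privacy (DP). A common way this is realized is DP-SGD: clip each per-example gradient to a fixed L2 norm, then add Gaussian noise calibrated to that clip norm before averaging. This is a minimal illustrative sketch of that general idea, not the paper's method; all names and parameter values here are assumptions.

```python
import math
import random

def clip(grad, max_norm=1.0):
    """Scale a gradient vector down so its L2 norm is at most max_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, max_norm / norm) if norm > 0 else 1.0
    return [g * scale for g in grad]

def dp_average(grads, max_norm=1.0, noise_mult=1.1, rng=None):
    """Average clipped per-example gradients, adding Gaussian noise
    whose scale is tied to the clipping norm (the DP-SGD recipe)."""
    rng = rng or random.Random(0)
    d = len(grads[0])
    clipped = [clip(g, max_norm) for g in grads]
    sigma = noise_mult * max_norm
    return [
        (sum(g[j] for g in clipped) + rng.gauss(0.0, sigma)) / len(grads)
        for j in range(d)
    ]

noisy_grad = dp_average([[3.0, 4.0], [0.3, 0.4]])
```

Because each example's contribution is bounded by the clip norm, the added noise masks any single training record, which is what limits the membership-inference ability the abstract mentions.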
Benchmark MEDIUM
Rebeka Toth, Tamas Bisztray, Richard Dubniczky
Phishing and spam emails remain a major cybersecurity threat, with attackers increasingly leveraging Large Language Models (LLMs) to craft highly...
5 months ago cs.CR cs.AI cs.DB
Benchmark MEDIUM
Rebeka Toth, Tamas Bisztray, Nils Gruschka
In this paper, we introduce a metadata-enriched generation framework (PhishFuzzer) that seeds real emails into Large Language Models (LLMs) to...
5 months ago cs.CR cs.AI cs.DB
Benchmark MEDIUM
Di Zhu, Chen Xie, Ziwei Wang +1 more
New York City reports over one hundred thousand motor vehicle collisions each year, creating substantial injury and public health burden. We present...
Attack MEDIUM
Herman Errico, Jiquan Ngiam, Shanita Sojan
The Model Context Protocol (MCP) replaces static, developer-controlled API integrations with more dynamic, user-driven agent systems, which also...
Survey MEDIUM
Jaehwan Park, Kyungchan Lim, Seonhye Park +1 more
The advent of Artificial Intelligence (AI), particularly large language models (LLMs), has revolutionized software development by enabling developers...
Other MEDIUM
Wei He, Kai Han, Hang Zhou +4 more
The optimization of large language models (LLMs) remains a critical challenge, particularly as model scaling exacerbates sensitivity to algorithmic...
5 months ago cs.LG cs.AI
Benchmark MEDIUM
Momoko Shiraishi, Yinzhi Cao, Takahiro Shinagawa
Command-line interface (CLI) fuzzing tests programs by mutating both command-line options and input file contents, thus enabling discovery of...
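The mechanism this abstract describes, mutating both the option string and the input file contents to form each test case, can be sketched minimally as follows. This is a generic illustration of CLI fuzzing, not the paper's tool; the option list and function names are hypothetical.

```python
import random

# Hypothetical option pool for the target program under test.
OPTIONS = ["-v", "--color", "-o", "--depth=3", "--force"]

def mutate_options(rng):
    """Pick a random subset and ordering of known CLI options."""
    k = rng.randint(0, len(OPTIONS))
    return rng.sample(OPTIONS, k)

def mutate_bytes(data, rng, n_flips=4):
    """Flip a few random bytes of the seed input file contents."""
    buf = bytearray(data)
    for _ in range(min(n_flips, len(buf))):
        i = rng.randrange(len(buf))
        buf[i] ^= rng.randrange(1, 256)  # XOR with nonzero -> byte changes
    return bytes(buf)

def make_test_case(seed_input, rng):
    """One fuzzing test case = mutated options + mutated file contents."""
    return mutate_options(rng), mutate_bytes(seed_input, rng)

rng = random.Random(0)
opts, data = make_test_case(b"hello world", rng)
```

Mutating both dimensions at once lets the fuzzer reach option-dependent code paths that file-only mutation would never exercise.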
Attack MEDIUM
Sidahmed Benabderrahmane, James Cheney, Talal Rahwan
Advanced Persistent Threats (APTs) pose a significant challenge in cybersecurity due to their stealthy and long-term nature. Modern supervised...
5 months ago cs.LG cs.AI cs.CR
Benchmark MEDIUM
Xuebo Qiu, Mingqi Lv, Yimei Zhang +4 more
Provenance-based threat hunting identifies Advanced Persistent Threats (APTs) on endpoints by correlating attack patterns described in Cyber Threat...
Benchmark MEDIUM
David Amebley, Sayanton Dibbo
In the age of agentic AI, the growing deployment of multi-modal models (MMs) has introduced new attack vectors that can leak sensitive training data...
5 months ago cs.CV cs.AI cs.CR
Benchmark MEDIUM
Abhijeet Pathak, Suvadra Barua, Dinesh Gudimetla +4 more
Large language models (LLMs) and autonomous coding agents are increasingly used to generate software across a wide range of domains. Yet a core...
5 months ago cs.SE cs.AI cs.CR
Attack MEDIUM
Steven Peh
Large Language Models (LLMs) remain vulnerable to prompt injection attacks, representing the most significant security threat in production...
5 months ago cs.CR cs.AI
Benchmark MEDIUM
Angelo Gaspar Diniz Nogueira, Kayua Oleques Paim, Hendrio Bragança +2 more
The ever-increasing number of Android devices and the accelerated evolution of malware, reaching over 35 million samples by 2024, highlight the...
5 months ago cs.CR cs.AI cs.LG
Benchmark MEDIUM
Yu Cui, Yifei Liu, Hang Fu +4 more
Research on the safety evaluation of large language models (LLMs) has become extensive, driven by jailbreak studies that elicit unsafe responses....
Benchmark MEDIUM
Rong Feng, Suman Saha
Obfuscation poses a persistent challenge for software engineering tasks such as program comprehension, maintenance, testing, and vulnerability...
Benchmark MEDIUM
Andrew Maranhão Ventura D'addario
The integration of Large Language Models (LLMs) into healthcare demands a safety paradigm rooted in primum non nocere. However, current...
5 months ago cs.CY cs.AI cs.CL
Defense MEDIUM
Junbo Zhang, Ran Chen, Qianli Zhou +2 more
Large language models demonstrate powerful capabilities across various natural language processing tasks, yet they also harbor safety...
5 months ago cs.CR cs.CL
Defense MEDIUM
Onat Gungor, Roshan Sood, Jiasheng Zhou +1 more
Large Language Models (LLMs) are highly effective for cybersecurity question answering (QA) but are difficult to deploy on edge devices due to their...