Benchmark MEDIUM
Ishan Kavathekar, Hemang Jain, Ameya Rathod +2 more
Large Language Models (LLMs) have demonstrated strong capabilities as autonomous agents through tool use, planning, and decision-making abilities,...
6 months ago cs.MA cs.AI
PDF
Benchmark MEDIUM
Hadi Reisizadeh, Jiajun Ruan, Yiwei Chen +3 more
Unlearning in large language models (LLMs) is critical for regulatory compliance and for building ethical generative AI systems that avoid producing...
Benchmark MEDIUM
Cyril Vallez, Alexander Sternfeld, Andrei Kucharavy +1 more
As the role of Large Language Models (LLM)-based coding assistants in software development becomes more critical, so does the role of the bugs they...
Attack MEDIUM
Raunak Somani, Aswani Kumar Cherukuri
This paper studies the integration of Large Language Models into cybersecurity tools and protocols. The main issue discussed in this paper is how...
Attack MEDIUM
Pedro Pereira, José Gouveia, João Vitorino +2 more
Magecart skimming attacks have emerged as a significant threat to client-side security and user trust in online payment systems. This paper addresses...
Tool MEDIUM
Tim Beyer, Jonas Dornbusch, Jakob Steimle +3 more
The rapid expansion of research on Large Language Model (LLM) safety and robustness has produced a fragmented and oftentimes buggy ecosystem of...
6 months ago cs.AI cs.SE
PDF
Defense MEDIUM
Oshando Johnson, Alexandra Fomina, Ranjith Krishnamurthy +3 more
The prevalence of security vulnerabilities has prompted companies to adopt static application security testing (SAST) tools for vulnerability...
6 months ago cs.SE cs.AI
PDF
Other MEDIUM
Hirohane Takagi, Gouki Minegishi, Shota Kizawa +2 more
Although behavioral studies have documented numerical reasoning errors in large language models (LLMs), the underlying representational mechanisms...
Benchmark MEDIUM
Shiyin Lin
Software fuzzing has become a cornerstone in automated vulnerability discovery, yet existing mutation strategies often lack semantic awareness,...
6 months ago cs.CR cs.AI
PDF
Defense MEDIUM
Mohammad Atif Quamar, Mohammad Areeb, Mikhail Kuznetsov +2 more
Aligning large language models (LLMs) with human values is crucial for safe deployment. Inference-time techniques offer granular control over...
Attack MEDIUM
Botao 'Amber' Hu, Helena Rong
As the "agentic web" takes shape, with billions of AI agents (often LLM-powered) autonomously transacting and collaborating, trust shifts from human...
6 months ago cs.HC cs.AI cs.MA
PDF
Benchmark MEDIUM
Jon Kutasov, Chloe Loughridge, Yuqi Sun +4 more
As AI systems become more capable and widely deployed as agents, ensuring their safe operation becomes critical. AI control offers one approach to...
Attack MEDIUM
W. K. M Mithsara, Ning Yang, Ahmed Imteaj +2 more
The widespread integration of wearable sensing devices in Internet of Things (IoT) ecosystems, particularly in healthcare, smart homes, and...
6 months ago cs.LG cs.CR
PDF
Attack MEDIUM
Roy Rinberg, Adam Karvonen, Alexander Hoover +2 more
As large AI models become increasingly valuable assets, the risk of model weight exfiltration from inference servers grows accordingly. An attacker...
6 months ago cs.CR cs.LG
PDF
Benchmark MEDIUM
Patrick Karlsen, Even Eilertsen
This paper investigates some of the risks introduced by "LLM poisoning," the intentional or unintentional introduction of malicious or biased data...
6 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Hanzhong Liang, Yue Duan, Xing Su +5 more
As the Web3 ecosystem evolves toward a multi-chain architecture, cross-chain bridges have become critical infrastructure for enabling...
Other MEDIUM
Sogol Masoumzadeh
Timely identification of issue reports reflecting software vulnerabilities is crucial, particularly for Internet-of-Things (IoT) where analysis is...
6 months ago cs.SE cs.AI cs.CR
PDF
Other MEDIUM
Yuhan Cao, Yu Wang, Sitong Liu +3 more
The widespread adoption of Large Language Models (LLMs) through Application Programming Interfaces (APIs) induces a critical vulnerability: the...
6 months ago cs.GT cs.AI
PDF
Attack MEDIUM
Kasimir Schulz, Amelia Kawasaki, Leo Ring
Large language models (LLMs) are widely deployed across various applications, often with safeguards to prevent the generation of harmful or...
6 months ago cs.CR cs.AI
PDF
Benchmark MEDIUM
Ariyan Hossain, Khondokar Mohammad Ahanaf Hannan, Rakinul Haque +4 more
Gender bias in language models has gained increasing attention in the field of natural language processing. Encoder-based transformer models, which...