The dual offensive and defensive utility of Large Language Models (LLMs) highlights a critical gap in AI security: the lack of unified frameworks for...
Nirhoshan Sivaroopan, Kanchana Thilakarathna, Albert Zomaya +6 more
Sponge attacks increasingly threaten LLM systems by inducing excessive computation and DoS. Existing defenses either rely on statistical filters that...
Satyapriya Krishna, Matteo Memelli, Tong Wang +5 more
Amazon published its Frontier Model Safety Framework (FMSF) as part of the Paris AI summit, following which we presented a report on Amazon's Premier...
The Model Context Protocol (MCP) has emerged as a de facto standard for integrating Large Language Models with external tools, yet no formal security...
Large Language Model (LLM)-based question-answering systems offer significant potential for automating customer support and internal knowledge access...
Loop vulnerabilities are a major class of risky constructs in software development. They can easily lead to infinite loops or executions, exhaust resources,...
With the spread of generative AI in recent years, attacks known as Whaling have become a serious threat. Whaling is a form of social engineering that...
Hossein Naderi, Alireza Shojaei, Lifu Huang +3 more
Robots are expected to play a major role in the future construction industry but face challenges due to high costs and difficulty adapting to dynamic...
Large Language Model (LLM)-based agent systems are increasingly deployed for complex real-world tasks but remain vulnerable to natural language-based...
The agent-tool interaction loop is a critical attack surface for modern Large Language Model (LLM) agents. Existing denial-of-service (DoS) attacks...