Loop vulnerabilities are a major class of risky constructs in software development. They can easily lead to infinite loops or runaway execution, exhaust resources,...
Split Learning (SL) offers a framework for collaborative model training that respects data privacy by allowing participants to jointly train a model without sharing their raw data...
Prompt injection remains a central obstacle to the safe deployment of large language models, particularly in multi-agent settings where intermediate...
Background: While Large Language Models (LLMs) have achieved widespread adoption, malicious prompt engineering, specifically "jailbreak attacks," poses...
The demand for customized large language models (LLMs) has led commercial LLM providers to offer black-box fine-tuning APIs, yet this convenience introduces...
Anirudh Sekar, Mrinal Agarwal, Rachel Sharma +4 more
Prompt injection attacks have become a growing vulnerability for LLM applications, where adversarial prompts exploit indirect input channels such...
Retrieval-Augmented Generation (RAG) has attracted significant attention due to its ability to combine the generative capabilities of Large Language...
Chetan Pathade, Vinod Dhimam, Sheheryar Ahmad +1 more
Serverless computing has achieved widespread adoption, with over 70% of AWS organizations using serverless solutions [1]. Meanwhile, machine learning...