Existing research on LLM agent security mainly focuses on prompt injection and unsafe input/output behaviors. However, as agents increasingly rely on...
Fariha Tanjim Shifat, Hariswar Baburaj, Ce Zhou +2 more
Large language models (LLMs) are increasingly embedded in open-source software (OSS) ecosystems, creating complex interactions among natural language...
Abinitha Gourabathina, Inkit Padhi, Manish Nagireddy +2 more
For Large Language Models (LLMs) to be reliably deployed, models must know when not to answer, i.e., when to abstain. Reasoning models, in particular,...
Retrieval-Augmented Language Models (RALMs) have demonstrated significant potential in knowledge-intensive tasks; however, they remain vulnerable to...
Matteo Migliarini, Joaquin Pereira Pizzini, Luca Moresca +3 more
Instrumental convergence predicts that sufficiently advanced AI agents will resist shutdown, yet current safety training (RLHF) may obscure this risk...
As TLS 1.3 encryption limits traditional Deep Packet Inspection (DPI), the security community has pivoted to Euclidean Transformer-based classifiers...
Shams Tarek, Dipayan Saha, Khan Thamid Hasan +3 more
The increasing complexity of modern system-on-chip designs amplifies hardware security risks and makes manual security property specification a major...