Poisoned Identifiers Survive LLM Deobfuscation: A Case Study on Claude Opus 4.6
Luis Guzmán Lorenzo
When an LLM deobfuscates JavaScript, can poisoned identifier names in the string table survive into the model's reconstructed code, even when the...
2,560+ academic papers on AI security, attacks, and defenses — showing 401–420.
Fariha Tanjim Shifat, Hariswar Baburaj, Ce Zhou +2 more
Large language models (LLMs) are increasingly embedded in open-source software (OSS) ecosystems, creating complex interactions among natural language...
Qiqing Huang, Xingyu Wang, Wanda Guo +2 more
Modern 5G user equipment (UE) processes Radio Resource Control (RRC) configuration messages during early control-plane exchanges, before...
Aobo Chen, Chenxu Zhao, Chenglin Miao +1 more
Large language models (LLMs) possess strong semantic understanding, driving significant progress in data mining applications. This is further...
Siyuan Li, Zehao Liu, Xi Lin +6 more
As Large Language Models (LLMs) are increasingly deployed in complex applications, their vulnerability to adversarial attacks raises urgent safety...
Abinitha Gourabathina, Inkit Padhi, Manish Nagireddy +2 more
For Large Language Models (LLMs) to be reliably deployed, models must effectively know when not to answer: abstain. Reasoning models, in particular,...
Jaemin Kim, Jae O Lee, Sumyeong Ahn +1 more
Retrieval-Augmented Language Models (RALMs) have demonstrated significant potential in knowledge-intensive tasks; however, they remain vulnerable to...
Matteo Migliarini, Joaquin Pereira Pizzini, Luca Moresca +3 more
Instrumental convergence predicts that sufficiently advanced AI agents will resist shutdown, yet current safety training (RLHF) may obscure this risk...
Vickson Ferrel
As TLS 1.3 encryption limits traditional Deep Packet Inspection (DPI), the security community has pivoted to Euclidean Transformer-based classifiers...
Jihoon Jeong
AI models of equivalent capability can exhibit fundamentally different behavioral patterns, yet no standardized instrument exists to measure these...
Ayush Garg, Sophia Hager, Jacob Montiel +5 more
Security teams face a challenge: the volume of newly disclosed Common Vulnerabilities and Exposures (CVEs) far exceeds the capacity to manually...
O. Clerc, R. Abdelghani, C. Desvaux +3 more
The rapid adoption of generative artificial intelligence (GenAI) in schools raises concerns about students' uncritical reliance on its outputs....
Yiheng Huang, Zhijia Zhao, Bihuan Chen +5 more
The model context protocol (MCP) standardizes how LLMs connect to external tools and data sources, enabling faster integration but introducing new...
Yukai Ma, Honglin He, Selina Song +2 more
Long-horizon navigation in complex urban environments relies heavily on continuous human operation, which leads to fatigue, reduced efficiency, and...
Shams Tarek, Dipayan Saha, Khan Thamid Hasan +3 more
The increasing complexity of modern system-on-chip designs amplifies hardware security risks and makes manual security property specification a major...
Zikai Zhang, Rui Hu, Olivera Kotevska +1 more
Large Language Models (LLMs) are powerful tools for answering user queries, yet they remain highly vulnerable to jailbreak attacks. Existing...
Weidi Luo, Xiaofei Wen, Tenghao Huang +5 more
Large language models (LLMs) are increasingly deployed for everyday tasks, including food preparation and health-related guidance. However, food...
Bowen Wei, Yunbei Zhang, Jinhao Pan +5 more
Personal AI agents like OpenClaw run with elevated privileges on users' local machines, where a single successful prompt injection can leak...
Tiankai Yang, Jiate Li, Yi Nian +5 more
LLM-based agents increasingly operate across repeated sessions, maintaining task states to ensure continuity. In many deployments, a single agent...
Manoj Parmar
World models -- learned internal simulators of environment dynamics -- are rapidly becoming foundational to autonomous decision-making in robotics,...