LLM-Powered Detection of Price Manipulation in DeFi
Lu Liu, Wuqi Zhang, Lili Wei +3 more
Decentralized Finance (DeFi) smart contracts manage billions of dollars, making them a prime target for exploits. Price manipulation vulnerabilities,...
Nils Philipp Walter, Chawin Sitawarin, Jamie Hayes +2 more
Large Language Models (LLMs) are increasingly deployed in agentic systems that interact with an external environment; this makes them susceptible to...
Yulong Chen, Yadong Liu, Jiawen Zhang +3 more
Large Language Models (LLMs), despite advances in safety alignment, remain vulnerable to jailbreak attacks designed to circumvent protective...
Hanbin Hong, Ashish Kundu, Ali Payani +2 more
Randomized smoothing has become essential for achieving certified adversarial robustness in machine learning models. However, current methods...
Runlin Lei, Lu Yi, Mingguo He +4 more
While Graph Neural Networks (GNNs) and Large Language Models (LLMs) are powerful approaches for learning on Text-Attributed Graphs (TAGs), a...
Qiusi Zhan, Angeline Budiman-Chan, Abdelrahman Zayed +3 more
Large language model (LLM) based search agents iteratively generate queries, retrieve external information, and reason to answer open-domain...
Bo-Han Feng, Chien-Feng Liu, Yu-Hsuan Li Liang +9 more
Large audio-language models (LALMs) extend text-based LLMs with auditory understanding, offering new opportunities for multimodal applications. While...
Yang Feng, Xudong Pan
Malicious agents pose significant threats to the reliability and decision-making capabilities of Multi-Agent Systems (MAS) powered by Large Language...
Eduard Andrei Cristea, Petter Molnes, Jingyue Li
Malicious software attacks are having an increasingly significant economic impact. Commercial malware detection software can be costly, and tools...
Yuexiao Liu, Lijun Li, Xingjun Wang +1 more
Recent advancements in Reinforcement Learning with Verifiable Rewards (RLVR) have gained significant attention due to their objective and verifiable...
Ahmed Aly, Essam Mansour, Amr Youssef
Advanced Persistent Threats (APTs) are stealthy cyberattacks that often evade detection in system-level audit logs. Provenance graphs model these...
Issam Seddik, Sami Souihi, Mohamed Tamaazousti +1 more
As Large Language Models (LLMs) gain traction across critical domains, ensuring secure and trustworthy training processes has become a major concern....
Mason Nakamura, Abhinav Kumar, Saaduddin Mahmud +3 more
A multi-agent system (MAS) powered by large language models (LLMs) can automate tedious user tasks such as meeting scheduling that require...
Jiarui Li, Yuhan Chai, Lei Du +3 more
Rule-based network intrusion detection systems play a crucial role in the real-time detection of Web attacks. However, most existing works primarily...
Ruben Belo, Marta Guimaraes, Claudia Soares
Large Language Models are susceptible to jailbreak attacks that bypass built-in safety guardrails (e.g., by tricking the model with adversarial...
Siyuan Li, Aodu Wulianghai, Xi Lin +4 more
With the increasing integration of large language models (LLMs) into open-domain writing, detecting machine-generated text has become a critical task...
Han Zhu, Juntao Dai, Jiaming Ji +8 more
With the widespread use of multi-modal Large Language models (MLLMs), safety issues have become a growing concern. Multi-turn dialogues, which are...
Jiahao Liu, Bonan Ruan, Xianglin Yang +5 more
LLM-based agents have demonstrated promising adaptability in real-world applications. However, these agents remain vulnerable to a wide range of...
Zhuochen Yang, Kar Wai Fok, Vrizlynn L. L. Thing
Large language models have gained widespread attention recently, but their potential security vulnerabilities, especially privacy leakage, are also...