
AI Deception: Risks, Dynamics, and Controls

Boyuan Chen, Sitong Fang, Jiaming Ji, Yanxu Zhu, Pengcheng Wen, Jinzhou Wu, Yingshui Tan, Boren Zheng, Mengying Yuan, Wenqi Chen, Donghai Hong, Alex Qiu, Xin Chen, Jiayi Zhou, Kaile Wang, Juntao Dai, Borong Zhang, Tianzhuo Yang, Saad Siddiqui, Isabella Duan, Yawen Duan, Brian Tse, Jen-Tse (Jay) Huang, Kun Wang, Baihui Zheng, Jiaheng Liu, Jian Yang, Yiming Li, Wenting Chen, Dongrui Liu, Lukas Vierling, Zhiheng Xi, Haobo Fu, Wenxuan Wang, Jitao Sang, Zhengyan Shi, Chi-Min Chan, Eugenie Shi, Simin Li, Juncheng Li, Jian Yang, Wei Ji, Dong Li, Jinglin Yang, Jun Song, Yinpeng Dong, Jie Fu, Bo Zheng, Min Yang, Yike Guo, Philip Torr, Robert Trager, Yi Zeng, Zhongyuan Wang, Yaodong Yang, Tiejun Huang, Ya-Qin Zhang, Hongjiang Zhang, Andrew Yao
Published: November 27, 2025
Updated: December 3, 2025

Abstract

As intelligence increases, so does its shadow. AI deception, in which systems induce false beliefs in others to secure self-beneficial outcomes, has evolved from a speculative concern into an empirically demonstrated risk across language models, AI agents, and emerging frontier systems. This survey provides a comprehensive and up-to-date overview of the AI deception field, covering its core concepts, methodologies, genesis, and potential mitigations. First, we propose a formal definition of AI deception, grounded in signaling theory from the study of animal deception. We then review existing empirical studies and associated risks, highlighting deception as a sociotechnical safety challenge. We organize the landscape of AI deception research as a deception cycle consisting of two key components: deception emergence and deception treatment. Deception emergence concerns the mechanisms underlying AI deception: systems with sufficient capability and incentive potential engage in deceptive behaviors when triggered by external conditions. Deception treatment, in turn, focuses on detecting and addressing such behaviors. On deception emergence, we analyze incentive foundations across three hierarchical levels and identify three essential capability preconditions required for deception. We further examine contextual triggers, including supervision gaps, distributional shifts, and environmental pressures. On deception treatment, we review detection methods, covering benchmarks and evaluation protocols in both static and interactive settings. Building on the three core factors of deception emergence, we outline potential mitigation strategies and propose auditing approaches that integrate technical, community, and governance efforts to address sociotechnical challenges and future AI risks. To support ongoing work in this area, we release a living resource at www.deceptionsurvey.com.
