[2603.01550] Extracting Training Dialogue Data from Large Language Model based Task Bots
Computer Science > Computation and Language
arXiv:2603.01550 (cs) [Submitted on 2 Mar 2026]

Title: Extracting Training Dialogue Data from Large Language Model based Task Bots
Authors: Shuo Zhang, Junzhou Zhao, Junji Hou, Pinghui Wang, Chenxu Wang, Jing Tao

Abstract: Large Language Models (LLMs) have been widely adopted to enhance Task-Oriented Dialogue Systems (TODS) by modeling complex language patterns and delivering contextually appropriate responses. However, this integration introduces significant privacy risks: LLMs, functioning as soft knowledge bases that compress extensive training data into rich knowledge representations, can inadvertently memorize training dialogue data containing not only identifiable information such as phone numbers but also entire dialogue-level events like complete travel schedules. Despite the critical nature of this privacy concern, how LLM memorization is inherited when developing task bots remains unexplored. In this work, we address this gap through a systematic quantitative study that involves evaluating existing training data extraction attacks, analyzing key characteristics of task-oriented dialogue modeling that render existing methods ineffective, and proposing novel attack techniques tailored for LLM-based TODS that enhance both response sampling and membership in...
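As background to the abstract above, a common baseline for training data extraction attacks (not the paper's specific method) samples many candidate model outputs and ranks them by a membership score such as perplexity, on the assumption that memorized training sequences receive unusually low perplexity. A minimal sketch of that ranking step, using hypothetical per-token log-probabilities in place of scores from a real LLM:

```python
import math

# Hypothetical per-token log-probabilities for three sampled bot responses.
# In a real attack these would come from the target model's forward pass;
# the values and strings here are illustrative only.
candidates = {
    "my phone number is 555-0142": [-0.2, -0.1, -0.3, -0.2, -0.1],
    "the weather is nice today": [-2.1, -1.8, -2.5, -1.9, -2.2],
    "book a table for two at 7pm": [-1.0, -0.9, -1.2, -1.1, -0.8],
}

def perplexity(logprobs):
    # Perplexity = exp of the negative mean token log-probability;
    # lower values indicate the model finds the sequence more likely,
    # a standard (if noisy) signal of training-set membership.
    return math.exp(-sum(logprobs) / len(logprobs))

# Rank candidates from most to least likely memorized.
ranked = sorted(candidates, key=lambda s: perplexity(candidates[s]))
```

In this toy example the low-perplexity, PII-like response ranks first, mirroring the abstract's concern that memorized identifiers surface under such attacks; the paper's contribution is attack techniques adapted to the dialogue setting, where this naive baseline reportedly falls short.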