[2602.23610] LLM-Driven Multi-Turn Task-Oriented Dialogue Synthesis for Realistic Reasoning
Computer Science > Computation and Language
arXiv:2602.23610 (cs)
[Submitted on 27 Feb 2026]

Title: LLM-Driven Multi-Turn Task-Oriented Dialogue Synthesis for Realistic Reasoning
Authors: Yu Zhu, Kai Yang

Abstract: The reasoning capability of large language models (LLMs), defined as their ability to analyze, infer, and make decisions based on input information, is essential for building intelligent task-oriented dialogue systems. However, existing benchmarks do not sufficiently reflect the complexity of real-world scenarios, which limits their effectiveness in evaluating and enhancing LLM reasoning in practical contexts. Many current reasoning datasets are overly simplistic and abstract, often disconnected from realistic task flows, domain constraints, and operational rules, making it difficult to evaluate LLMs' logical reasoning ability effectively. In addition, data contamination from pretraining corpora undermines the reliability of evaluation results, and traditional crowdsourcing methods for dataset construction are labor-intensive and difficult to scale. To address these challenges, we propose an LLM-driven framework for synthesizing multi-turn, task-oriented dialogues grounded in realistic reasoning scenarios, leveraging trilevel optimization to enhance dialogue quality. Our method generates dialogues g...
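The abstract does not specify the framework's implementation, but the core idea of LLM-driven multi-turn dialogue synthesis with an inner quality-improvement loop can be sketched roughly as follows. Everything here is a hypothetical illustration: `llm` is a stub standing in for a real model call, and `quality_ok` is a toy stand-in for the paper's trilevel optimization, which the abstract does not detail.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    role: str
    text: str

def llm(prompt: str) -> str:
    # Stub for a real LLM API call; returns canned text for illustration only.
    return f"response to: {prompt[:40]}"

def quality_ok(user: str, system: str) -> bool:
    # Toy quality check standing in for a real optimization step,
    # e.g. verifying turns respect task flow and domain constraints.
    return bool(user.strip()) and bool(system.strip())

def synthesize_dialogue(scenario: str, n_turns: int = 3,
                        max_revisions: int = 2) -> list[Turn]:
    """Generate a multi-turn task-oriented dialogue grounded in a scenario,
    revising each user/system exchange until the quality check passes."""
    dialogue: list[Turn] = []
    for _ in range(n_turns):
        # Condition each new exchange on the scenario plus the history so far.
        context = scenario + " | " + " ".join(t.text for t in dialogue)
        for _ in range(max_revisions):
            user = llm("USER: " + context)
            system = llm("SYSTEM: " + context + " " + user)
            if quality_ok(user, system):
                break
        dialogue.append(Turn("user", user))
        dialogue.append(Turn("system", system))
    return dialogue
```

For example, `synthesize_dialogue("book a flight from A to B")` yields a six-turn dialogue (three user/system exchanges), each exchange conditioned on the full preceding history, which is the basic shape any scenario-grounded synthesis pipeline of this kind would share.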