[2509.17766] A State-Update Prompting Strategy for Efficient and Robust Multi-turn Dialogue
Computer Science > Computation and Language
arXiv:2509.17766 (cs)
[Submitted on 22 Sep 2025 (v1), last revised 7 Apr 2026 (this version, v2)]

Title: A State-Update Prompting Strategy for Efficient and Robust Multi-turn Dialogue
Authors: Ziyi Liu

Abstract: Large Language Models (LLMs) struggle with information forgetting and inefficiency in long-horizon, multi-turn dialogues. To address this, we propose a training-free prompt engineering method, the State-Update Multi-turn Dialogue Strategy. It utilizes "State Reconstruction" and "History Remind" mechanisms to effectively manage dialogue history. Our strategy shows strong performance across multiple multi-hop QA datasets. For instance, on the HotpotQA dataset, it improves the core information filtering score by 32.6%, leading to a 14.1% increase in the downstream QA score, while also reducing inference time by 73.1% and token consumption by 59.4%. Ablation studies confirm the pivotal roles of both components. Our work offers an effective solution for optimizing LLMs in long-range interactions, providing new insights for developing more robust Agents.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2509.17766 [cs.CL] (or arXiv:2509.17766v2 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2509.17766
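To make the abstract's description concrete, the sketch below illustrates one plausible reading of a state-update prompting loop: instead of concatenating the full transcript each turn, the prompt is rebuilt from a compact set of distilled facts ("State Reconstruction") plus a one-line pointer to the previous exchange ("History Remind"). The prompt wording, the FACT: extraction convention, and the call_llm helper are assumptions for illustration, not the authors' actual templates or code.

# Minimal sketch of a state-update prompting loop, assuming a generic
# call_llm(prompt) -> str helper (hypothetical stub; substitute a real client).
# The prompt wording below is an illustrative assumption, not the paper's template.

from dataclasses import dataclass, field
from typing import List


@dataclass
class DialogueState:
    """Compact task state carried across turns instead of the full transcript."""
    facts: List[str] = field(default_factory=list)   # key information distilled so far
    last_turn: str = ""                              # brief reminder of the previous exchange


def call_llm(prompt: str) -> str:
    """Placeholder LLM call (hypothetical); replace with an actual model client."""
    return f"[model answer to: {prompt[:60]}...]"


def state_update_turn(state: DialogueState, user_msg: str) -> str:
    # State Reconstruction: rebuild a compact prompt from distilled facts,
    # rather than appending the entire dialogue history.
    state_block = "\n".join(f"- {f}" for f in state.facts) or "- (none yet)"

    # History Remind: a short pointer to the immediately preceding turn.
    remind = f"Previous turn: {state.last_turn}" if state.last_turn else ""

    prompt = (
        "Known key facts:\n"
        f"{state_block}\n"
        f"{remind}\n"
        f"Current question: {user_msg}\n"
        "Answer, then list any new key facts on lines starting with 'FACT:'."
    )
    reply = call_llm(prompt)

    # Update the carried state: keep newly extracted facts and a one-line reminder.
    state.facts.extend(
        line[5:].strip() for line in reply.splitlines() if line.startswith("FACT:")
    )
    state.last_turn = f"Q: {user_msg} | A: {reply[:80]}"
    return reply


if __name__ == "__main__":
    state = DialogueState()
    for q in ["Who directed Inception?", "What year was it released?"]:
        print(state_update_turn(state, q))

Because each turn's prompt grows with the size of the distilled state rather than with the full conversation, this kind of loop is consistent with the reported reductions in inference time and token consumption, though the paper's exact mechanisms may differ in detail.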