[2603.27435] Improving Attributed Long-form Question Answering with Intent Awareness
Computer Science > Computation and Language

arXiv:2603.27435 (cs)

[Submitted on 28 Mar 2026]

Title: Improving Attributed Long-form Question Answering with Intent Awareness

Authors: Xinran Zhao, Aakanksha Naik, Jay DeYoung, Joseph Chee Chang, Jena D. Hwang, Tongshuang Wu, Varsha Kishore

Abstract: Large language models (LLMs) are increasingly used to generate comprehensive, knowledge-intensive reports. However, while these models are trained on diverse academic papers and reports, they are not exposed to the reasoning processes and intents that guide authors in crafting these documents. We hypothesize that enhancing a model's intent awareness can significantly improve the quality of generated long-form reports. We develop and employ structured, tag-based schemes to better elicit the underlying implicit intents to write or cite. We demonstrate that these extracted intents both enhance zero-shot generation in LLMs and enable the creation of high-quality synthetic data for fine-tuning smaller models. Our experiments show improved performance across various challenging scientific report generation tasks, with average improvements of +2.9 and +12.3 absolute points over baselines for large and small models, respectively. Furthermore, our analysis illuminates how intent awareness enhances model citati...