[2602.14279] Whom to Query for What: Adaptive Group Elicitation via Multi-Turn LLM Interactions
Summary
The paper presents an adaptive group elicitation framework that uses multi-turn interactions with large language models (LLMs) to jointly decide which respondents to query and which questions to ask, improving data collection under limited budgets.
Why It Matters
This research addresses the challenge of efficiently gathering accurate group-level insights when resources are limited and responses are incomplete. By leveraging LLMs, the proposed method improves response quality while minimizing cost, which is crucial for fields that rely on survey data and collective assessments.
Key Takeaways
- Introduces a framework for adaptive group elicitation using LLMs.
- Optimizes both question selection and respondent choice to improve data quality.
- Demonstrates significant gains in response prediction accuracy with constrained budgets.
Computer Science > Machine Learning
arXiv:2602.14279 (cs)
[Submitted on 15 Feb 2026]
Title: Whom to Query for What: Adaptive Group Elicitation via Multi-Turn LLM Interactions
Authors: Ruomeng Ding, Tianwei Gao, Thomas P. Zollo, Eitan Bachmat, Richard Zemel, Zhun Deng
Abstract: Eliciting information to reduce uncertainty about latent group-level properties from surveys and other collective assessments requires allocating limited questioning effort under real costs and missing data. Although large language models enable adaptive, multi-turn interactions in natural language, most existing elicitation methods optimize what to ask with a fixed respondent pool, and do not adapt respondent selection or leverage population structure when responses are partial or incomplete. To address this gap, we study adaptive group elicitation, a multi-round setting where an agent adaptively selects both questions and respondents under explicit query and participation budgets. We propose a theoretically grounded framework that combines (i) an LLM-based expected information gain objective for scoring candidate questions with (ii) heterogeneous graph neural network propagation that aggregates observed responses and participant attributes to impute missing responses and guide per-round respondent selection. This closed-loop...
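The selection loop described in the abstract can be illustrated with a minimal sketch: greedily choosing the (question, respondent) pair with the highest expected information gain (EIG) about a latent group property, under a query budget. This is a toy discrete model written for illustration; all names and the fixed-prior simplification are assumptions, not the paper's implementation, which scores questions with an LLM-based EIG objective and propagates beliefs through a heterogeneous GNN.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(q * math.log(q) for q in p if q > 0.0)

def expected_information_gain(prior, likelihood):
    """EIG of one query about a latent group property theta.

    prior[t]        : P(theta = t)
    likelihood[a][t]: P(answer = a | theta = t) for one (question, respondent) pair
    Returns H(prior) - E_a[H(posterior given a)].
    """
    gain = entropy(prior)
    for row in likelihood:  # one row per possible answer
        p_a = sum(l * p for l, p in zip(row, prior))  # marginal P(answer = a)
        if p_a > 0.0:
            posterior = [l * p / p_a for l, p in zip(row, prior)]
            gain -= p_a * entropy(posterior)
    return gain

def greedy_elicitation(prior, candidates, query_budget):
    """Greedily pick the highest-EIG (question, respondent) pair each round.

    candidates: dict mapping (question, respondent) -> likelihood table.
    The prior is held fixed between rounds here for brevity; the paper's
    closed loop would instead update beliefs after each observed answer.
    """
    chosen, remaining = [], dict(candidates)
    for _ in range(query_budget):
        if not remaining:
            break
        best = max(remaining,
                   key=lambda k: expected_information_gain(prior, remaining[k]))
        chosen.append(best)
        del remaining[best]
    return chosen
```

With a uniform prior over two latent states, a pair whose answers correlate with the state (likelihood rows `[0.9, 0.1]` and `[0.1, 0.9]`) has positive EIG, while an uninformative pair (both rows `[0.5, 0.5]`) has EIG zero, so the greedy loop queries the informative pair first.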