[2602.14279] Whom to Query for What: Adaptive Group Elicitation via Multi-Turn LLM Interactions


Summary

The paper proposes an adaptive group elicitation framework that uses multi-turn interactions with large language models (LLMs) to jointly optimize which respondents to query and which questions to ask, improving data collection under limited budgets.

Why It Matters

This research addresses the challenges of efficiently gathering accurate group-level insights in situations with limited resources and incomplete data. By leveraging LLMs, the proposed method enhances the quality of responses while minimizing costs, which is crucial for fields relying on survey data and collective assessments.

Key Takeaways

  • Introduces a framework for adaptive group elicitation using LLMs.
  • Optimizes both question selection and respondent choice to improve data quality (a minimal scoring sketch follows this list).
  • Demonstrates significant gains in response prediction accuracy under constrained budgets.
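As a rough illustration of the second takeaway, the sketch below estimates an expected-information-gain score for one candidate question by Monte Carlo. It is not the paper's implementation: simulate_answer is a hypothetical stand-in for the LLM that predicts how the group would answer the question given a hypothesized latent state theta, and the score is the plug-in mutual information EIG(q) = H(a) - E_theta[H(a | theta)].

```python
import math
import random
from collections import Counter

def entropy(samples):
    """Shannon entropy (nats) of the empirical distribution of discrete samples."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def expected_information_gain(theta_samples, simulate_answer, n_sim=20):
    """Monte Carlo estimate of EIG(q) = H(a) - E_theta[H(a | theta)].

    theta_samples   : draws from the current belief over the latent group property
    simulate_answer : hypothetical stand-in for an LLM that predicts a discrete
                      answer to candidate question q given a group state theta
    """
    per_theta = [[simulate_answer(t) for _ in range(n_sim)] for t in theta_samples]
    marginal = [a for answers in per_theta for a in answers]
    cond = sum(entropy(answers) for answers in per_theta) / len(per_theta)
    return entropy(marginal) - cond

# Toy check: an answer that tracks theta is worth more than one that ignores it.
random.seed(0)
thetas = [random.random() for _ in range(200)]
informative = lambda t: int(random.random() < t)        # answer correlates with theta
uninformative = lambda t: int(random.random() < 0.5)    # answer ignores theta
print(expected_information_gain(thetas, informative))    # clearly positive
print(expected_information_gain(thetas, uninformative))  # near zero
```

Under this estimator, questions whose simulated answers vary systematically with theta score high, while questions whose answers are independent of theta score near zero.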

Computer Science > Machine Learning
arXiv:2602.14279 (cs) [Submitted on 15 Feb 2026]

Title: Whom to Query for What: Adaptive Group Elicitation via Multi-Turn LLM Interactions
Authors: Ruomeng Ding, Tianwei Gao, Thomas P. Zollo, Eitan Bachmat, Richard Zemel, Zhun Deng

Abstract: Eliciting information to reduce uncertainty about latent group-level properties from surveys and other collective assessments requires allocating limited questioning effort under real costs and missing data. Although large language models enable adaptive, multi-turn interactions in natural language, most existing elicitation methods optimize what to ask with a fixed respondent pool, and do not adapt respondent selection or leverage population structure when responses are partial or incomplete. To address this gap, we study adaptive group elicitation, a multi-round setting where an agent adaptively selects both questions and respondents under explicit query and participation budgets. We propose a theoretically grounded framework that combines (i) an LLM-based expected information gain objective for scoring candidate questions with (ii) heterogeneous graph neural network propagation that aggregates observed responses and participant attributes to impute missing responses and guide per-round respondent selection. This closed-loop...
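The closed loop the abstract describes (impute missing responses, score questions, select respondents) can be sketched per round as follows. This is a minimal sketch under stated assumptions, not the paper's method: a similarity-weighted average over respondent attributes stands in for the heterogeneous GNN propagation, responses are assumed binary, and the function name and all parameters are hypothetical.

```python
import numpy as np

def run_elicitation_round(responses, attributes, question_scores, participation_budget):
    """One elicitation round: impute, pick a question, pick respondents.

    responses            : (n, m) array of binary answers in {0, 1}, np.nan if missing
    attributes           : (n, d) respondent feature matrix
    question_scores      : length-m information-gain scores (e.g. from the EIG sketch)
    participation_budget : max respondents to query this round
    """
    n, m = responses.shape
    filled = responses.copy()

    # 1) Impute missing entries with a similarity-weighted average of observed
    #    answers; this stands in for the heterogeneous GNN propagation step.
    sim = attributes @ attributes.T
    for j in range(m):
        observed = ~np.isnan(responses[:, j])
        for i in np.flatnonzero(~observed):
            if observed.any():
                w = np.clip(sim[i, observed], 0.0, None) + 1e-9
                filled[i, j] = np.average(responses[observed, j], weights=w)
            else:
                filled[i, j] = 0.5   # no signal at all: fall back to a flat prior

    # 2) Ask the highest-scoring candidate question this round.
    q = int(np.argmax(question_scores))

    # 3) Query the respondents whose imputed answer to q is least certain
    #    (closest to 0.5), skipping anyone who already answered it.
    uncertainty = -np.abs(filled[:, q] - 0.5)
    uncertainty[~np.isnan(responses[:, q])] = -np.inf
    chosen = np.argsort(uncertainty)[-participation_budget:][::-1]
    return q, chosen.tolist()

# Toy usage: 4 respondents, 3 questions, 2 attributes, budget of 2 per round.
responses = np.array([[1.0, np.nan, 0.0],
                      [np.nan, 1.0, np.nan],
                      [0.0, np.nan, np.nan],
                      [np.nan, np.nan, 1.0]])
attributes = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
scores = np.array([0.10, 0.25, 0.05])
print(run_elicitation_round(responses, attributes, scores, participation_budget=2))
```

Targeting the respondents whose imputed answers are least certain is just one plausible acquisition rule; in the paper, the GNN-guided per-round respondent selection would replace both the imputation heuristic and this uncertainty rule.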
