[2603.29890] Interview-Informed Generative Agents for Product Discovery: A Validation Study
Computer Science > Human-Computer Interaction
arXiv:2603.29890 (cs) [Submitted on 10 Mar 2026]

Title: Interview-Informed Generative Agents for Product Discovery: A Validation Study
Authors: Zichao Wang, Alexa Siu

Abstract: Large language models (LLMs) have shown strong performance on standardized social science instruments, but their value for product discovery remains unclear. We investigate whether interview-informed generative agents can simulate user responses in concept testing scenarios. Using in-depth workflow interviews with knowledge workers, we created personalized agents and compared their evaluations of novel AI concepts against the same participants' responses. Our results show that agents are distribution-calibrated but identity-imprecise: they fail to replicate the specific individual they are grounded in, yet approximate population-level response distributions. These findings highlight both the potential and the limits of LLM simulation in design research. While unsuitable as a substitute for individual-level insights, simulation may provide value for early-stage concept screening and iteration, where distributional accuracy suffices. We discuss implications for integrating simulation responsibly into product development workflows.

Subjects: Human-Computer Interaction (cs.HC); Artific...