[2510.25662] User Misconceptions of LLM-Based Conversational Programming Assistants
Computer Science > Human-Computer Interaction
arXiv:2510.25662 (cs)
[Submitted on 29 Oct 2025 (v1), last revised 26 Feb 2026 (this version, v2)]
Title: User Misconceptions of LLM-Based Conversational Programming Assistants
Authors: Gabrielle O'Brien, Antonio Pedro Santos Alves, Sebastian Baltes, Grischa Liebel, Mircea Lungu, Marcos Kalinowski
Abstract: Programming assistants powered by large language models (LLMs) have become widely available, with conversational assistants like ChatGPT particularly accessible to novice programmers. However, varied tool capabilities and inconsistent availability of extensions (web search, code execution, retrieval-augmented generation) create opportunities for user misconceptions that may lead to over-reliance, unproductive practices, or insufficient quality control. We characterize misconceptions that users of conversational LLM-based assistants may have in programming contexts through a two-phase approach: first brainstorming and cataloging potential misconceptions, then conducting qualitative analysis of Python-programming conversations from the WildChat dataset. We find evidence that users have misplaced expectations about features like web access, code execution, and non-text outputs. We also note the potential for deeper conceptual issues around information requireme...