[2510.20635] Why Did Apple Fall: Evaluating Curiosity in Large Language Models
Computer Science > Computation and Language
arXiv:2510.20635 (cs)
[Submitted on 23 Oct 2025 (v1), last revised 14 Apr 2026 (this version, v2)]

Title: Why Did Apple Fall: Evaluating Curiosity in Large Language Models
Authors: Haoyu Wang, Sihang Jiang, Yuyan Chen, Xiaojun Meng, Jiansheng Wei, Yitong Wang, Yanghua Xiao

Abstract: Curiosity serves as a pivotal conduit through which human beings discover and learn new knowledge. Recent advances in large language models (LLMs) for natural language processing have sparked discussion about whether these models possess a capability for curiosity-driven learning akin to that of humans. In this paper, starting from the Five-Dimensional Curiosity scale Revised (5DCR), a questionnaire for assessing human curiosity, we design a comprehensive evaluation framework covering dimensions such as Information Seeking, Thrill Seeking, and Social Curiosity to assess the extent of curiosity exhibited by LLMs. The results demonstrate that LLMs exhibit a stronger thirst for knowledge than humans but still tend to make conservative choices when faced with uncertain environments. We further investigate the relationship between curiosity and thinking in LLMs, confirming that curious behaviors can enhance a model's reasoning and active-learning abilities. These findings suggest that LLMs have the potential to...