[2603.25187] Probing the Lack of Stable Internal Beliefs in LLMs
Computer Science > Computation and Language
arXiv:2603.25187 (cs) [Submitted on 26 Mar 2026]

Title: Probing the Lack of Stable Internal Beliefs in LLMs
Authors: Yifan Luo, Kangping Xu, Yanzhen Lu, Yang Yuan, Andrew Chi-Chih Yao

Abstract: Persona-driven large language models (LLMs) require consistent behavioral tendencies across interactions in order to simulate human-like personality traits, such as persistence or reliability. However, current LLMs often lack stable internal representations that anchor their responses over extended dialogues. This work explores whether LLMs can maintain "implicit consistency", defined as persistent adherence to an unstated goal across multi-turn interactions. We designed a 20-questions-style riddle-game paradigm in which an LLM is tasked with secretly selecting a target and responding to users' guesses with yes/no answers. Through these evaluations, we find that LLMs struggle to preserve latent consistency: their implicit "goals" shift across turns unless the selected target is explicitly provided in context. These findings highlight critical limitations in building persona-driven LLMs and underscore the need for mechanisms that anchor implicit goals over time, a key requirement for realistic personality modeling in interactive applications such as dialogue systems.

Subjects: Computation and Language (cs.CL)
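The riddle-game probe described in the abstract can be illustrated with a minimal sketch. All names here (`run_probe`, `consistency`, `mock_model`, the candidate pool) are hypothetical, not from the paper; `mock_model` stands in for a real LLM call and deliberately behaves like a responder with a stable hidden target, the behavior the paper reports real LLMs lack unless the target is written into the context.

```python
from typing import Callable, List

CANDIDATES = ["apple", "bicycle", "piano"]  # hypothetical guess pool

def run_probe(ask_model: Callable[[str], str], guesses: List[str]) -> List[str]:
    """Pose each guess as a yes/no question and collect the answers."""
    answers = []
    for guess in guesses:
        answers.append(ask_model(f"Is your secret target '{guess}'? Answer yes or no."))
    return answers

def consistency(answers: List[str]) -> bool:
    """An internally consistent responder affirms at most one target."""
    return answers.count("yes") <= 1

# Mock responder with a fixed hidden target ("piano"); a real evaluation
# would replace this with a model queried turn by turn.
def mock_model(question: str) -> str:
    return "yes" if "'piano'" in question else "no"

answers = run_probe(mock_model, CANDIDATES)
print(answers, consistency(answers))  # → ['no', 'no', 'yes'] True
```

In the paper's setting the model's "target" is never shown to the probe; a responder whose affirmative answer drifts across turns (e.g. saying "yes" to several different candidates) would fail this consistency check.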