[2603.20170] Learning Dynamic Belief Graphs for Theory-of-mind Reasoning
Computer Science > Artificial Intelligence
arXiv:2603.20170 (cs)
[Submitted on 20 Mar 2026]

Title: Learning Dynamic Belief Graphs for Theory-of-mind Reasoning
Authors: Ruxiao Chen, Xilei Zhao, Thomas J. Cova, Frank A. Drews, Susu Xu

Abstract: Theory of Mind (ToM) reasoning with Large Language Models (LLMs) requires inferring how people's implicit, evolving beliefs shape what they seek and how they act under uncertainty -- especially in high-stakes settings such as disaster response, emergency medicine, and human-in-the-loop autonomy. Prior approaches either prompt LLMs directly or use latent-state models that treat beliefs as static and independent, often producing incoherent mental models over time and weak reasoning in dynamic contexts. We introduce a structured cognitive trajectory model for LLM-based ToM that represents mental state as a dynamic belief graph, jointly inferring latent beliefs, learning their time-varying dependencies, and linking belief evolution to information seeking and decisions. Our model contributes (i) a novel projection from textualized probabilistic statements to consistent probabilistic graphical model updates, (ii) an energy-based factor graph representation of belief interdependencies, and (iii) an ELBO-based objective that captures belief accumulation and delayed decisions. Across multiple ...
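To make contribution (ii) concrete, the following is a minimal, hypothetical sketch of an energy-based factor graph over binary beliefs: unary terms encode evidence for each latent belief, pairwise weights encode belief interdependencies, and marginals come from the induced Gibbs distribution. This is an illustration of the general technique only, not the authors' actual model; all names and parameter values here are invented for the example.

```python
import itertools
import math

def energy(state, unary, pairwise):
    """Negative log-potential of a joint assignment of binary beliefs."""
    e = -sum(unary[i] * s for i, s in enumerate(state))
    for (i, j), w in pairwise.items():
        e -= w * state[i] * state[j]
    return e

def marginals(n, unary, pairwise):
    """Exact marginals P(b_i = 1) by enumerating the Gibbs distribution
    (feasible only for small graphs; real models would use approximate
    inference)."""
    states = list(itertools.product([0, 1], repeat=n))
    weights = [math.exp(-energy(s, unary, pairwise)) for s in states]
    z = sum(weights)
    return [sum(w for s, w in zip(states, weights) if s[i]) / z
            for i in range(n)]

# Two coupled beliefs: new evidence raises belief 0 directly, and the
# positive pairwise dependency pulls belief 1 up with it.
unary = [0.0, 0.0]
pairwise = {(0, 1): 1.5}
before = marginals(2, unary, pairwise)
unary[0] += 2.0  # an incoming observation supporting belief 0
after = marginals(2, unary, pairwise)
print(before, after)
```

The point of the sketch is the coupling: updating only the unary term for belief 0 still shifts the marginal of belief 1, which is the kind of belief-interdependency a static, independent latent-state model cannot capture.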