[2603.05488] Reasoning Theater: Disentangling Model Beliefs from Chain-of-Thought
Computer Science > Computation and Language

arXiv:2603.05488 (cs) [Submitted on 5 Mar 2026]

Title: Reasoning Theater: Disentangling Model Beliefs from Chain-of-Thought

Authors: Siddharth Boppana, Annabel Ma, Max Loeffler, Raphael Sarfati, Eric Bigelow, Atticus Geiger, Owen Lewis, Jack Merullo

Abstract: We provide evidence of performative chain-of-thought (CoT) in reasoning models, where a model becomes strongly confident in its final answer but continues generating tokens without revealing its internal belief. Our analysis compares activation probing, early forced answering, and a CoT monitor across two large models (DeepSeek-R1 671B and GPT-OSS 120B) and finds task-difficulty-specific differences: the model's final answer is decodable from activations far earlier in the CoT than a monitor can detect it, especially for easy, recall-based MMLU questions. We contrast this with genuine reasoning on difficult multi-hop GPQA-Diamond questions. Despite this, inflection points (e.g., backtracking, "aha" moments) occur almost exclusively in responses where probes show large belief shifts, suggesting these behaviors track genuine uncertainty rather than learned "reasoning theater." Finally, probe-guided early exit reduces token usage by up to 80% on MMLU and 30% on GPQA-Diamond with similar accuracy, positioning attention probing...
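The probe-guided early exit described in the abstract can be illustrated with a minimal sketch: a linear probe is trained to decode the final answer from intermediate hidden states, and decoding halts as soon as the probe's confidence crosses a threshold. The probe architecture, threshold value, toy dimensions, and every function name below are illustrative assumptions, not the paper's implementation.

# Minimal sketch of probe-guided early exit, assuming:
#  - per-token hidden states are available during decoding,
#  - a linear probe maps hidden states to answer choices,
#  - a fixed confidence threshold triggers the exit.
# All names and numbers here are illustrative, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
HIDDEN_DIM, N_CHOICES = 64, 4  # toy stand-ins for the real model's sizes

# --- 1. Train a linear probe on (hidden state, final answer) pairs. ---
# In the paper's setting these would come from CoT traces of a real model;
# here we fabricate linearly separable toy data so the script runs standalone.
answer_directions = rng.normal(size=(N_CHOICES, HIDDEN_DIM))
labels = rng.integers(0, N_CHOICES, size=2000)
states = answer_directions[labels] + 0.5 * rng.normal(size=(2000, HIDDEN_DIM))
probe = LogisticRegression(max_iter=1000).fit(states, labels)

# --- 2. At decode time, check the probe after every generated token. ---
def probe_guided_early_exit(hidden_states, threshold=0.9):
    """Return (step, predicted answer) at the first token position where
    the probe's confidence exceeds `threshold`, else the final position."""
    for step, h in enumerate(hidden_states):
        probs = probe.predict_proba(h.reshape(1, -1))[0]
        if probs.max() >= threshold:
            return step, int(probs.argmax())  # exit early, force the answer
    return step, int(probs.argmax())  # no early exit: use the last position

# Simulate a 100-token CoT whose hidden states drift toward answer 2,
# mimicking a model that settles on its answer mid-generation.
target = answer_directions[2]
trace = np.array([t / 99 * target + (1 - t / 99) * rng.normal(size=HIDDEN_DIM)
                  for t in range(100)])
step, answer = probe_guided_early_exit(trace)
print(f"exited at token {step}/99 with predicted answer {answer}")

The token savings reported in the abstract would then correspond to the fraction of the trace skipped after the exit point; the threshold trades saved tokens against the risk of committing before the model's belief has stabilized.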