[2603.19262] The α-Law of Observable Belief Revision in Large Language Model Inference
Computer Science > Computation and Language

arXiv:2603.19262 (cs) [Submitted on 26 Feb 2026]

Title: The α-Law of Observable Belief Revision in Large Language Model Inference

Authors: Mike Farmer, Abhinav Kochar, Yugyung Lee

Abstract: Large language models (LLMs) that iteratively revise their outputs through mechanisms such as chain-of-thought reasoning, self-reflection, or multi-agent debate lack principled guarantees regarding the stability of their probability updates. We identify a consistent multiplicative scaling law that governs how instruction-tuned LLMs revise probability assignments over candidate answers, expressed as a belief revision exponent that controls how prior beliefs and verification evidence are combined during updates. We show theoretically that values of the exponent below one are necessary and sufficient for asymptotic stability under repeated revision. Empirical evaluation across 4,975 problems spanning graduate-level benchmarks (GPQA Diamond, TheoremQA, MMLU-Pro, and ARC-Challenge) and multiple model families (GPT-5.2 and Claude Sonnet 4) reveals near-Bayesian update behavior, with models operating slightly above the stability boundary in single-step revisions. However, multi-step experiments demonstrate that the exponent decreases over successive revisions, producing contracti...
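The stability claim can be illustrated with a small simulation. This is a hedged sketch, not the paper's actual update rule (the abstract does not give the formula): it assumes a tempered-Bayes revision in which the prior is raised to the exponent α before being multiplied by fixed verification evidence, posterior ∝ prior^α · likelihood. In log-space this is a linear recursion with contraction factor α, so repeated revision converges iff α < 1, matching the stated stability condition. All function names here are illustrative.

```python
import numpy as np

def tempered_update(prior, likelihood, alpha):
    """One hypothetical α-weighted revision step: posterior ∝ prior**alpha * likelihood.

    alpha controls how much the prior belief is retained when evidence
    is multiplied in; this form is an assumption, not the paper's formula.
    """
    q = (prior ** alpha) * likelihood
    return q / q.sum()  # renormalize to a probability distribution

def iterate_revisions(p0, likelihood, alpha, steps):
    """Apply the revision repeatedly with the same (fixed) evidence.

    In log-space, log p_{t+1} = alpha * log p_t + log likelihood + const,
    a linear recursion that contracts toward a fixed point iff alpha < 1.
    """
    p = np.asarray(p0, dtype=float)
    lik = np.asarray(likelihood, dtype=float)
    for _ in range(steps):
        p = tempered_update(p, lik, alpha)
    return p

# With alpha = 0.8 < 1, the iteration settles on a fixed point
# proportional to likelihood**(1 / (1 - alpha)), regardless of the prior.
p = iterate_revisions([0.7, 0.2, 0.1], [0.2, 0.5, 0.3], alpha=0.8, steps=50)
```

Under these assumptions, the fixed point sharpens the evidence distribution (exponent 1/(1−α) = 5 for α = 0.8), while α ≥ 1 would let repeated revision drift without a stable limit.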