[2604.01170] Online Reasoning Calibration: Test-Time Training Enables Generalizable Conformal LLM Reasoning
Computer Science > Machine Learning

arXiv:2604.01170 (cs)

[Submitted on 1 Apr 2026]

Title: Online Reasoning Calibration: Test-Time Training Enables Generalizable Conformal LLM Reasoning

Authors: Cai Zhou, Zekai Wang, Menghua Wu, Qianyu Julie Zhu, Flora C. Shi, Chenyu Wang, Ashia Wilson, Tommi Jaakkola, Stephen Bates

Abstract: While test-time scaling has enabled large language models to solve highly difficult tasks, state-of-the-art results come at exorbitant compute costs. These inefficiencies can be attributed to the miscalibration of post-trained language models and the lack of calibration in popular sampling techniques. Here, we present Online Reasoning Calibration (ORCA), a framework for calibrating the sampling process that draws upon conformal prediction and test-time training. Specifically, we introduce a meta-learning procedure that updates the calibration module for each input. This allows us to provide valid confidence estimates under distributional shift, e.g., in thought patterns that occur across different stages of reasoning, or in prompt distributions between model development and deployment. ORCA not only provides theoretical guarantees on conformal risks, but also empirically shows higher efficiency and generalization across different reasoning tasks. At risk level $\delta=0.1$, ORCA improves Q...
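The abstract combines two ingredients: conformal calibration at a target risk level $\delta$, and an online, per-input update of the calibration module. The sketch below is not ORCA's actual algorithm (the abstract does not specify it); it is a generic split-conformal quantile plus an adaptive-conformal-style running adjustment of the risk level. All function names and the step size `lr` are illustrative assumptions.

```python
import numpy as np

def conformal_quantile(scores, delta):
    """Split conformal prediction: return the (1 - delta) empirical quantile
    of calibration nonconformity scores, with the standard finite-sample
    correction ceil((n + 1) * (1 - delta)) / n."""
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - delta)))
    # Clip to n in case the correction exceeds the sample size.
    return np.sort(scores)[min(k, n) - 1]

def update_alpha(alpha_t, covered, delta, lr=0.05):
    """Online adjustment in the spirit of adaptive conformal inference:
    after observing whether the last prediction set covered the truth,
    nudge the working risk level so that long-run miscoverage tracks delta."""
    err = 0.0 if covered else 1.0
    return alpha_t + lr * (delta - err)

if __name__ == "__main__":
    # Calibration scores 0.01, 0.02, ..., 1.00 at risk level delta = 0.1:
    # k = ceil(101 * 0.9) = 91, so the threshold is the 91st sorted score.
    scores = np.arange(1, 101) / 100.0
    q = conformal_quantile(scores, delta=0.1)
    print(q)  # 0.91

    # A covered test point relaxes the working risk level slightly;
    # a miss would tighten it instead.
    alpha = update_alpha(0.1, covered=True, delta=0.1)
    print(alpha)  # 0.105
```

Under exchangeability, thresholding new scores at `q` gives at least $1-\delta$ coverage; the online update is one simple way to retain approximate coverage when, as the abstract emphasizes, the score distribution shifts between calibration and deployment.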