[2605.07353] Confidence-Aware Alignment Makes Reasoning LLMs More Reliable
Computer Science > Artificial Intelligence
arXiv:2605.07353 (cs)
[Submitted on 8 May 2026]

Title: Confidence-Aware Alignment Makes Reasoning LLMs More Reliable
Authors: Kejia Chen, Jiawen Zhang, Yihong Wu, Kewei Gao, Jian Lou, Zunlei Feng, Mingli Song, Ruoxi Jia

Abstract: Large reasoning models often reach correct answers through flawed intermediate steps, creating a gap between final accuracy and reasoning reliability. Existing alignment strategies close this gap with external verifiers or massive sampling, which limits scalability. In this work, we introduce CASPO (Confidence-Aware Step-wise Preference Optimization), a framework that aligns token-level confidence with step-wise logical correctness through iterative Direct Preference Optimization, without training a separate reward model. During inference, we propose Confidence-aware Thought (CaT), which leverages this calibrated confidence to dynamically prune uncertain reasoning branches with negligible O(V) latency. Experiments across ten benchmarks and multiple model families show that CASPO consistently improves reasoning reliability and inference efficiency. CASPO scales to Qwen3-8B-Base and surpasses tree-search baselines on AIME'24 and AIME'25 without using reward-model data. We also release a step-wise dataset with confidence annotations to support fine-grained analysis.
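The abstract names iterative Direct Preference Optimization as the training mechanism but does not give CASPO's exact objective. As a point of reference, the following is a minimal sketch of a standard DPO loss applied to step-level preference pairs, where a logically correct step is preferred over a flawed one from the same prefix. The function name, tensor shapes, and the beta default are illustrative assumptions, and CASPO's confidence-alignment term is not reproduced here.

```python
import torch
import torch.nn.functional as F

def stepwise_dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log pi(step_w | prefix), shape (B,)
    policy_rejected_logps: torch.Tensor,  # log pi(step_l | prefix), shape (B,)
    ref_chosen_logps: torch.Tensor,       # log pi_ref(step_w | prefix), shape (B,)
    ref_rejected_logps: torch.Tensor,     # log pi_ref(step_l | prefix), shape (B,)
    beta: float = 0.1,                    # illustrative default
) -> torch.Tensor:
    """Standard DPO objective on step-level preference pairs.

    Each pair contrasts a logically correct reasoning step (chosen)
    against a flawed one (rejected) continuing the same prefix, so no
    separate reward model is needed.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between correct and flawed steps.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

In an iterative setup, the model trained on one round of step-level pairs would generate the candidates for the next round; how CASPO schedules these rounds is not described on this page.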
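CaT's pruning rule is likewise not specified here; the abstract only indicates an O(V) per-token overhead, which is consistent with reading confidence directly off the softmax over the vocabulary that decoding already computes. A hypothetical sketch, assuming geometric-mean token probability as the step confidence and a fixed threshold tau (all names and the threshold are assumptions, not the paper's definitions):

```python
import torch

def step_confidence(step_logits: torch.Tensor,
                    step_token_ids: torch.Tensor) -> float:
    """Geometric-mean probability the model assigned to the tokens it
    actually emitted in one reasoning step.

    step_logits: (T, V) pre-softmax scores for the step's T tokens.
    step_token_ids: (T,) ids of the generated tokens.
    The softmax over the vocabulary is the only extra work, hence an
    O(V)-per-token overhead on top of decoding.
    """
    logprobs = torch.log_softmax(step_logits, dim=-1)           # (T, V)
    chosen = logprobs.gather(-1, step_token_ids.unsqueeze(-1))  # (T, 1)
    return chosen.mean().exp().item()

def prune_branches(branches: list[dict], tau: float = 0.7) -> list[dict]:
    """Hypothetical CaT-style gate: drop reasoning branches whose latest
    step confidence falls below tau; each branch dict is assumed to
    carry a precomputed "confidence" entry."""
    return [b for b in branches if b["confidence"] >= tau]
```

The geometric mean is one common aggregate; a min-over-tokens score would prune more aggressively on a single uncertain token, and which aggregate (if either) the calibrated confidence in the paper corresponds to cannot be determined from the abstract.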