[2511.10788] From Efficiency to Adaptivity: A Deeper Look at Adaptive Reasoning in Large Language Models
Computer Science > Artificial Intelligence
arXiv:2511.10788 (cs)
[Submitted on 13 Nov 2025 (v1), last revised 1 Mar 2026 (this version, v2)]

Title: From Efficiency to Adaptivity: A Deeper Look at Adaptive Reasoning in Large Language Models
Authors: Chao Wu, Baoheng Li, Mingchen Gao, Yu Tian, Zhenyi Wang

Abstract: Recent advances in large language models (LLMs) have made reasoning a central benchmark for evaluating intelligence. While prior surveys focus on efficiency by examining how to shorten reasoning chains or reduce computation, this view overlooks a fundamental challenge: current LLMs apply uniform reasoning strategies regardless of task complexity, generating long traces for trivial problems while failing to extend reasoning for difficult tasks. This survey reframes reasoning through the lens of adaptivity: the capability to allocate reasoning effort based on input characteristics such as difficulty and uncertainty. We make three contributions. First, we formalize deductive, inductive, and abductive reasoning within the LLM context, connecting these classical cognitive paradigms with their algorithmic realizations. Second, we formalize adaptive reasoning as a control-augmented policy optimization problem balancing task performance with computational cost, distinguishing learned policies...
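The abstract's framing of adaptive reasoning as a policy optimization problem trading task performance against computational cost can be sketched in notation not given in the abstract itself (the symbols below, including the policy pi, reward R, cost C, and trade-off weight lambda, are assumptions for illustration, not the paper's definitions):

```latex
% A hypothetical objective for adaptive reasoning:
% pi   : reasoning policy producing a trace tau for input x
% R    : task reward (e.g., answer correctness)
% C    : computational cost of the trace (e.g., token count)
% lambda : weight balancing performance against cost
\max_{\pi} \; J(\pi)
  = \mathbb{E}_{x \sim \mathcal{D},\; \tau \sim \pi(\cdot \mid x)}
      \bigl[ R(x, \tau) \bigr]
  \; - \; \lambda \, \mathbb{E}_{x,\tau}\bigl[ C(\tau) \bigr]
```

Under this reading, a policy is "adaptive" when the expected cost term pushes it to spend few tokens on easy inputs while the reward term justifies longer traces on hard ones.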