[2604.01161] Reasoning Shift: How Context Silently Shortens LLM Reasoning
Computer Science > Machine Learning

arXiv:2604.01161 (cs)

[Submitted on 1 Apr 2026]

Title: Reasoning Shift: How Context Silently Shortens LLM Reasoning
Authors: Gleb Rodionov

Abstract: Large language models (LLMs) exhibiting test-time scaling behavior, such as extended reasoning traces and self-verification, have demonstrated remarkable performance on complex, long-horizon reasoning tasks. However, the robustness of these reasoning behaviors remains underexplored. To investigate this, we conduct a systematic evaluation of multiple reasoning models across three scenarios: (1) problems augmented with lengthy, irrelevant context; (2) multi-turn conversational settings with independent tasks; and (3) problems presented as a subtask within a complex task. We observe an interesting phenomenon: reasoning models tend to produce much shorter reasoning traces (up to 50% shorter) for the same problem under these context conditions than when the problem is presented in isolation. A finer-grained analysis reveals that this compression is associated with a decrease in self-verification and uncertainty-management behaviors, such as double-checking. While this behavioral shift does not compromise performance on straightforward problems, it may affect performance on more challenging tasks. We hope our findings draw additional attention to both the robust...
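The three evaluation scenarios in the abstract can be illustrated with a minimal sketch. The code below is hypothetical (not the authors' released code): `make_prompts`, the filler text, and the example tasks are all assumptions, showing only how one problem could be wrapped in each context condition and how the relative trace shortening could be measured.

```python
# Hypothetical sketch of the three context conditions from the abstract.
# Names and helper functions here are illustrative assumptions.

def make_prompts(problem: str, filler: str, other_tasks: list[str]) -> dict[str, str]:
    """Return the isolated baseline plus the three augmented conditions."""
    return {
        # baseline: problem presented in isolation
        "isolated": problem,
        # (1) lengthy, irrelevant context prepended to the problem
        "irrelevant_context": filler + "\n\n" + problem,
        # (2) multi-turn conversation of independent tasks, problem asked last
        "multi_turn": "\n\n".join(other_tasks + [problem]),
        # (3) problem embedded as a subtask of a larger composite task
        "subtask": "Complete all of the following steps.\n" + "\n".join(
            f"Step {i + 1}: {t}" for i, t in enumerate(other_tasks + [problem])
        ),
    }

def relative_shortening(baseline_tokens: int, context_tokens: int) -> float:
    """Fraction by which a reasoning trace shrank vs. the isolated baseline."""
    return 1.0 - context_tokens / baseline_tokens

prompts = make_prompts(
    problem="What is 17 * 24?",
    filler="(long irrelevant passage) " * 50,
    other_tasks=["Translate 'hello' to French.", "Name a prime above 100."],
)
# e.g. a 600-token trace vs. 1200 tokens in isolation is a 50% reduction
print(relative_shortening(1200, 600))  # → 0.5
```

Each prompt variant would be sent to the same reasoning model, and trace lengths compared against the isolated baseline to quantify the compression the paper reports.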