[2603.20508] Measuring Reasoning Trace Legibility: Can Those Who Understand Teach?
Computer Science > Multiagent Systems
arXiv:2603.20508 (cs)
[Submitted on 20 Mar 2026]
Title: Measuring Reasoning Trace Legibility: Can Those Who Understand Teach?
Authors: Dani Roytburg, Shreya Sridhar, Daphne Ippolito

Abstract: Language models are increasingly being trained to "reason" before answering users' queries, outputting hundreds or even thousands of tokens' worth of deliberation before their final answer. While the main intention of reasoning is to improve models' ability to arrive at a correct answer, we argue that these models should be assessed for the legibility of their reasoning traces in addition to the correctness of their final answers. In this paper, we evaluate the quality of 90k reasoning traces from 12 Reasoning Language Models (RLMs). We introduce the concept of transfer utility, which assesses how useful an RLM's reasoning traces are for guiding a weaker, non-reasoning model toward arriving at the correct answer. We find that the reasoning traces of the highest-performing models rank among the lowest for legibility. Furthermore, we uncover tensions between efficiency-based measurements of legibility (such as trace length) and transfer utility. These tensions establish a legibility Pareto frontier, and we demonstrate that an RLM's ability to output highly legible traces ...
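The transfer-utility idea described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the function names, the stub weak model, and the toy examples below are all assumptions, and transfer utility is modeled here simply as the weak model's accuracy gain when the RLM's trace is provided alongside the question.

```python
# Hypothetical sketch of transfer utility: the accuracy gain a weaker,
# non-reasoning model obtains when an RLM's reasoning trace accompanies
# the question. All names below are illustrative, not from the paper.

def transfer_utility(weak_model, examples):
    """examples: list of (question, trace, gold_answer) tuples.

    Returns the fraction-of-examples accuracy gain from conditioning
    the weak model on the reasoning trace.
    """
    base = sum(weak_model(q) == gold for q, _, gold in examples)
    guided = sum(weak_model(q, trace=t) == gold for q, t, gold in examples)
    return (guided - base) / len(examples)

# Toy stand-in for a weak model: it answers correctly only when the
# trace happens to contain the answer (purely for demonstration).
def toy_weak_model(question, trace=""):
    return "42" if "42" in trace else "unknown"

examples = [
    ("What is 6*7?", "6*7 = 42, so the answer is 42.", "42"),
    ("What is 40+2?", "40 plus 2 equals 42.", "42"),
    ("What is 50-8?", "The result is unclear.", "42"),
]
print(transfer_utility(toy_weak_model, examples))
```

Here the weak model answers nothing correctly on its own but two of three questions with traces, so the sketch reports a transfer utility of 2/3; a legible trace is one that moves this number, regardless of how long or short it is.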