[2604.04902] Are Latent Reasoning Models Easily Interpretable?
Computer Science > Machine Learning

arXiv:2604.04902 (cs)

[Submitted on 6 Apr 2026]

Title: Are Latent Reasoning Models Easily Interpretable?
Authors: Connor Dilgren, Sarah Wiegreffe

Abstract: Latent reasoning models (LRMs) have attracted significant research interest due to their low inference cost (relative to explicit reasoning models) and their theoretical ability to explore multiple reasoning paths in parallel. However, these benefits come at the cost of reduced interpretability: LRMs are difficult to monitor because they do not reason in natural language. This paper investigates LRM interpretability by examining two state-of-the-art LRMs. First, we find that latent reasoning tokens are often unnecessary for LRMs' predictions: on logical reasoning datasets, LRMs can almost always produce the same final answers without using latent reasoning at all. This underutilization of reasoning tokens may partially explain why LRMs do not consistently outperform explicit reasoning methods, and it raises doubts about the role these tokens are claimed to play in prior work. Second, we demonstrate that when latent reasoning tokens are necessary for performance, we can decode the gold reasoning traces for 65-93% of correctly predicted instances. This suggests LRMs often implement the expected solution rather than an uninterpretable ...
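Both findings rest on simple, checkable measurements: an ablation of the latent reasoning budget and a trace-match rate over decoded latent tokens. The sketch below illustrates how such metrics could be computed; it is not the paper's code, and `predict`, `decode_trace`, the latent budget of 32, and all data fields are hypothetical stand-ins for the models and datasets the abstract refers to.

```python
# Hypothetical sketch (not from the paper) of the two measurements the
# abstract describes: (1) how often an LRM's final answer survives
# ablating its latent reasoning tokens, and (2) how often decoded latent
# traces match gold reasoning traces on correctly answered instances.

from typing import Callable


def answer_agreement(questions: list[str],
                     predict: Callable[[str, int], str],
                     latent_budget: int = 32) -> float:
    """Fraction of questions whose final answer is unchanged when the
    latent reasoning budget is ablated from `latent_budget` to zero."""
    same = sum(predict(q, latent_budget) == predict(q, 0) for q in questions)
    return same / len(questions)


def trace_match_rate(instances: list[dict],
                     predict: Callable[[str, int], str],
                     decode_trace: Callable[[str], str],
                     latent_budget: int = 32) -> float:
    """Among correctly predicted instances, the fraction whose decoded
    latent reasoning trace exactly matches the gold trace."""
    correct = [ex for ex in instances
               if predict(ex["question"], latent_budget) == ex["answer"]]
    if not correct:
        return 0.0
    matches = sum(decode_trace(ex["question"]) == ex["gold_trace"]
                  for ex in correct)
    return matches / len(correct)


# Toy stand-ins so the sketch runs end to end; a real study would call
# the LRM's inference and latent-decoding procedures here.
def toy_predict(question: str, num_latent_tokens: int) -> str:
    return "yes"


def toy_decode(question: str) -> str:
    return "step 1; step 2"


data = [{"question": "Is A true?", "answer": "yes",
         "gold_trace": "step 1; step 2"}]
print(answer_agreement([ex["question"] for ex in data], toy_predict))  # 1.0
print(trace_match_rate(data, toy_predict, toy_decode))                 # 1.0
```

With exact string matching, a high answer_agreement on a dataset would indicate the latent tokens are unnecessary for those predictions, and a high trace_match_rate would indicate the decoded latent reasoning follows the expected solution; looser matching criteria are possible but not assumed here.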