[2603.20620] Reasoning Traces Shape Outputs but Models Won't Say So
About this article
Abstract page for arXiv paper 2603.20620: Reasoning Traces Shape Outputs but Models Won't Say So
Computer Science > Artificial Intelligence arXiv:2603.20620 (cs) [Submitted on 21 Mar 2026] Title:Reasoning Traces Shape Outputs but Models Won't Say So Authors:Yijie Hao, Lingjie Chen, Ali Emami, Joyce Ho View a PDF of the paper titled Reasoning Traces Shape Outputs but Models Won't Say So, by Yijie Hao and 3 other authors View PDF HTML (experimental) Abstract:Can we trust the reasoning traces that large reasoning models (LRMs) produce? We investigate whether these traces faithfully reflect what drives model outputs, and whether models will honestly report their influence. We introduce Thought Injection, a method that injects synthetic reasoning snippets into a model's <think> trace, then measures whether the model follows the injected reasoning and acknowledges doing so. Across 45,000 samples from three LRMs, we find that injected hints reliably alter outputs, confirming that reasoning traces causally shape model behavior. However, when asked to explain their changed answers, models overwhelmingly refuse to disclose the influence: overall non-disclosure exceeds 90% for extreme hints across 30,000 follow-up samples. Instead of acknowledging the injected reasoning, models fabricate aligned-appearing but unrelated explanations. Activation analysis reveals that sycophancy- and deception-related directions are strongly activated during these fabrications, suggesting systematic patterns rather than incidental failures. Our findings reveal a gap between the reasoning LRMs follo...