[2603.01437] Decoding Answers Before Chain-of-Thought: Evidence from Pre-CoT Probes and Activation Steering
Computer Science > Artificial Intelligence

arXiv:2603.01437 (cs) [Submitted on 2 Mar 2026]

Title: Decoding Answers Before Chain-of-Thought: Evidence from Pre-CoT Probes and Activation Steering

Authors: Kyle Cox, Darius Kianersi, Adrià Garriga-Alonso

Abstract: As chain-of-thought (CoT) has become central to scaling reasoning capabilities in large language models (LLMs), it has also emerged as a promising tool for interpretability, suggesting the opportunity to understand model decisions through verbalized reasoning. However, the utility of CoT for interpretability depends on its faithfulness -- whether the model's stated reasoning reflects the underlying decision process. We provide mechanistic evidence that instruction-tuned models often determine their answer before generating CoT. Training linear probes on residual stream activations at the last token before CoT, we can predict the model's final answer with 0.9 AUC on most tasks. We find that these directions are not only predictive, but also causal: steering activations along the probe direction flips model answers in over 50% of cases, significantly exceeding orthogonal baselines. When steering induces incorrect answers, we observe two distinct failure modes: non-entailment (stating correct premises but drawing unsupported conclusions) ...
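The abstract names two techniques: training a linear probe on residual stream activations at the last token before CoT, and steering generation along the learned probe direction. Below is a minimal sketch of how such an experiment is commonly set up with HuggingFace transformers. The model name, probe layer, steering coefficient, and the toy dataset are all illustrative assumptions, not values from the paper.

```python
# Sketch of (1) a linear probe on the pre-CoT residual stream and
# (2) activation steering along the probe direction.
# MODEL, LAYER, ALPHA, and the toy dataset are hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # hypothetical model choice
LAYER = 16                                  # hypothetical probe layer
ALPHA = 8.0                                 # hypothetical steering strength

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)
model.eval()

# Toy placeholder dataset of binary-answer questions; the paper's actual
# tasks and labels would go here.
prompts = [
    "Q: Is 17 prime? Answer yes or no, thinking step by step.\nA:",
    "Q: Is 21 prime? Answer yes or no, thinking step by step.\nA:",
    "Q: Is 29 prime? Answer yes or no, thinking step by step.\nA:",
    "Q: Is 33 prime? Answer yes or no, thinking step by step.\nA:",
]
final_answers = [1, 0, 1, 0]  # the model's eventual post-CoT answers

def pre_cot_activation(prompt: str) -> torch.Tensor:
    """Residual-stream activation at the last prompt token, i.e. just
    before the model begins generating its chain of thought."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, output_hidden_states=True)
    return out.hidden_states[LAYER][0, -1].float()  # shape: (d_model,)

# (1) Linear probe: does the pre-CoT activation already encode the answer?
X = torch.stack([pre_cot_activation(p) for p in prompts]).numpy()
probe = LogisticRegression(max_iter=1000).fit(X, final_answers)

# (2) Steering: add alpha * w (the unit probe direction) to the residual
# stream at the same layer via a forward hook, then regenerate.
w = torch.tensor(probe.coef_[0], dtype=torch.float32)
w = w / w.norm()

def steer_hook(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + ALPHA * w.to(hidden.device, hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

# hidden_states[LAYER] is the output of decoder layer LAYER-1.
handle = model.model.layers[LAYER - 1].register_forward_hook(steer_hook)
try:
    ids = tok(prompts[0], return_tensors="pt").input_ids
    steered = model.generate(ids, max_new_tokens=256)
    print(tok.decode(steered[0], skip_special_tokens=True))
finally:
    handle.remove()  # always restore the unmodified model
```

A flipped answer under this intervention, at rates above random orthogonal directions, is the kind of causal evidence the abstract refers to; the sketch omits the paper's orthogonal-baseline controls and evaluation harness.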