[2604.03754] Testing the Limits of Truth Directions in LLMs
Computer Science > Computation and Language
arXiv:2604.03754 (cs) [Submitted on 4 Apr 2026]

Title: Testing the Limits of Truth Directions in LLMs
Authors: Angelos Poulis, Mark Crovella, Evimaria Terzi

Abstract: Large language models (LLMs) have been shown to encode the truth of statements in their activation space along a linear truth direction. Previous studies have argued that these directions are universal in certain respects, while more recent work has questioned this conclusion, citing limited generalization across some settings. In this work, we identify several limits of truth-direction universality that were not previously understood. We first show that truth directions are highly layer-dependent, and that a full understanding of universality requires probing at many layers of the model. We then show that truth directions depend heavily on task type, emerging in earlier layers for factual tasks and in later layers for reasoning tasks; their performance also varies across levels of task complexity. Finally, we show that model instructions dramatically affect truth directions: simple correctness-evaluation instructions significantly affect the generalization ability of truth probes. Our findings indicate that universality claims for truth directions are more limited than previously known, with significant differences observable ...