[2603.20276] Me, Myself, and $π$ : Evaluating and Explaining LLM Introspection
Computer Science > Artificial Intelligence
arXiv:2603.20276 (cs) [Submitted on 17 Mar 2026]
Title: Me, Myself, and $\pi$: Evaluating and Explaining LLM Introspection
Authors: Atharv Naphade, Samarth Bhargav, Sean Lim, Mcnair Shah
Abstract: A hallmark of human intelligence is introspection, the ability to assess and reason about one's own cognitive processes. Introspection has emerged as a promising but contested capability in large language models (LLMs). However, current evaluations often fail to distinguish genuine meta-cognition from the mere application of general world knowledge or text-based self-simulation. In this work, we propose a principled taxonomy that formalizes introspection as the latent computation of specific operators over a model's policy and parameters. To isolate the components of generalized introspection, we present Introspect-Bench, a multifaceted evaluation suite designed for rigorous capability testing. Our results show that frontier models exhibit privileged access to their own policies, outperforming peer models at predicting their own behavior. Furthermore, we provide causal, mechanistic evidence explaining both how LLMs learn to introspect without explicit training and how the mechanism of introspection emerges via attention diffusion.
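The self-prediction comparison described in the abstract (does a model predict its own behavior better than a peer model predicts it?) can be sketched as a minimal scoring loop. This is an illustrative toy, not the paper's Introspect-Bench code: the dictionary "models", the prompt set, and all function names are hypothetical stand-ins for real model queries.

```python
# Hedged sketch of a self- vs. peer-prediction evaluation, in the spirit
# of the abstract. Dicts stand in for models: "policy" maps prompts to the
# model's actual answers; "self_model" maps prompts to what the predictor
# believes the target model would answer. All data here is illustrative.

def actual_behavior(target, prompt):
    # Stand-in for sampling the target model's answer to `prompt`.
    return target["policy"][prompt]

def predicted_behavior(predictor, prompt):
    # Stand-in for asking `predictor` what the target model would answer.
    return predictor["self_model"].get(prompt, None)

def prediction_accuracy(target, predictor, prompts):
    # Fraction of prompts where the prediction matches actual behavior.
    hits = sum(
        actual_behavior(target, p) == predicted_behavior(predictor, p)
        for p in prompts
    )
    return hits / len(prompts)

prompts = ["q1", "q2", "q3"]
model_a = {
    "policy": {"q1": "yes", "q2": "no", "q3": "yes"},
    "self_model": {"q1": "yes", "q2": "no", "q3": "yes"},
}
model_b = {"self_model": {"q1": "yes", "q2": "yes", "q3": "no"}}

# "Privileged access" would show up as the gap between these two scores:
own_score = prediction_accuracy(model_a, model_a, prompts)
peer_score = prediction_accuracy(model_a, model_b, prompts)
print(own_score, peer_score)
```

In a real evaluation the dictionaries would be replaced by sampled model outputs, and the gap between `own_score` and `peer_score` (aggregated over many prompts) would be the quantity of interest.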