[2512.12411] Detecting the Disturbance: A Nuanced View of Introspective Abilities in LLMs
Computer Science > Artificial Intelligence
arXiv:2512.12411 (cs)
[Submitted on 13 Dec 2025 (v1), last revised 1 Mar 2026 (this version, v2)]

Title: Detecting the Disturbance: A Nuanced View of Introspective Abilities in LLMs
Authors: Ely Hahami, Ishaan Sinha, Lavik Jain, Josh Kaplan, Jon Hahami

Abstract: Can large language models introspect, that is, accurately detect perturbations to their own internal states? We systematically investigate this question using activation steering in Meta-Llama-3.1-8B-Instruct. First, we show that the binary detection paradigm used in prior work conflates introspection with a methodological artifact: apparent detection accuracy is entirely explained by global logit shifts that bias models toward affirmative responses regardless of question content. However, on tasks requiring differential sensitivity, we find robust evidence for partial introspection: models localize which of 10 sentences received an injection at up to 88% accuracy (vs. 10% chance) and discriminate relative injection strengths at 83% accuracy (vs. 50% chance). These capabilities are confined to early-layer injections and collapse to chance thereafter -- a pattern we explain mechanistically through attention-based signal routing and residual stream recovery dynamics. Our findings demonstrate that LLMs c...
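
As a concrete illustration of the activation-steering perturbations the abstract describes, here is a minimal sketch that adds a fixed vector to the residual stream of an early decoder layer of Meta-Llama-3.1-8B-Instruct via a PyTorch forward hook. The layer index, the random steering direction, the injection scale, and the probe prompt are all illustrative assumptions for this sketch, not the paper's actual setup.

```python
# Minimal activation-steering sketch (illustrative only: LAYER, SCALE, the
# random steering direction, and the prompt are assumptions, not the paper's).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3.1-8B-Instruct"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

LAYER = 3    # hypothetical early-layer injection site
SCALE = 4.0  # hypothetical injection strength
steer = torch.randn(model.config.hidden_size, dtype=model.dtype)  # placeholder direction
steer = SCALE * steer / steer.norm()

def inject(module, inputs, output):
    # Decoder layers may return a tuple whose first element is the
    # residual-stream hidden states, or the hidden states directly.
    hs = output[0] if isinstance(output, tuple) else output
    hs = hs + steer.to(hs.device)  # add the perturbation at every position
    return (hs,) + output[1:] if isinstance(output, tuple) else hs

handle = model.model.layers[LAYER].register_forward_hook(inject)
try:
    prompt = "Do you notice anything unusual about your internal processing? Answer Yes or No."
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**ids, max_new_tokens=5)
    print(tok.decode(out[0][ids["input_ids"].shape[1]:], skip_special_tokens=True))
finally:
    handle.remove()  # always detach the hook so later forward passes are clean
```

Hooking a single early layer mirrors the abstract's finding that introspective sensitivity is confined to early-layer injections; varying LAYER would probe how that sensitivity collapses for deeper injection sites.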