[2603.05414] Dissociating Direct Access from Inference in AI Introspection
Computer Science > Artificial Intelligence

arXiv:2603.05414 (cs)

[Submitted on 5 Mar 2026]

Title: Dissociating Direct Access from Inference in AI Introspection

Authors: Harvey Lederman, Kyle Mahowald

Abstract: Introspection is a foundational cognitive ability, but its mechanism is not well understood. Recent work has shown that AI models can introspect. We study their mechanism of introspection, first extensively replicating Lindsey et al. (2025)'s thought-injection detection paradigm in large open-source models. We show that these models detect injected representations via two separable mechanisms: (i) probability matching (inferring from the perceived anomaly of the prompt) and (ii) direct access to internal states. The direct-access mechanism is content-agnostic: models detect that an anomaly occurred but cannot reliably identify its semantic content. The two model classes we study confabulate injected concepts that are high-frequency and concrete (e.g., "apple"); for them, correct concept guesses typically require significantly more tokens. This content-agnostic introspective mechanism is consistent with leading theories in philosophy and psychology.

Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Cite as: arXiv:2603.05414 [cs.AI] (or arXiv:2603.05414v1 [cs.AI] for this version)
https://doi.org/10.48550/arX...