[2502.12179] Sparse Shift Autoencoders for Identifying Concepts from Large Language Model Activations
Computer Science > Machine Learning
arXiv:2502.12179 (cs)
[Submitted on 14 Feb 2025 (v1), last revised 27 Feb 2026 (this version, v2)]

Title: Sparse Shift Autoencoders for Identifying Concepts from Large Language Model Activations
Authors: Shruti Joshi, Andrea Dittadi, Sébastien Lachapelle, Dhanya Sridhar

Abstract: Unsupervised approaches to large language model (LLM) interpretability, such as sparse autoencoders (SAEs), offer a way to decode LLM activations into interpretable and, ideally, controllable concepts. On the one hand, these approaches alleviate the need for supervision from concept labels, paired prompts, or explicit causal models. On the other hand, without additional assumptions, SAEs are not guaranteed to be identifiable. In practice, they may learn latent dimensions that entangle multiple underlying concepts. If we use these dimensions to extract vectors for steering specific LLM behaviours, this non-identifiability might result in interventions that inadvertently affect unrelated properties. In this paper, we bring the question of identifiability to the forefront of LLM interpretability research. Specifically, we introduce Sparse Shift Autoencoders (SSAEs) which learn sparse representations of differences between embeddings rather than the embeddings themselves. Crucially, we show that SSAEs are ...
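The core idea described in the abstract — fitting a sparse autoencoder to *differences* between activation vectors rather than to the activations themselves — can be sketched in a few lines. The snippet below is a toy illustration only, not the paper's implementation: the dimensions, the untrained random weights, the ReLU encoder, and the `ssae_loss` objective (reconstruction error on the shift plus an L1 sparsity penalty) are all assumptions made for the sketch.

```python
import random

random.seed(0)

D, K = 4, 6  # toy embedding dimension and number of latent concept dimensions

# Randomly initialized encoder/decoder weights stand in for learned ones;
# an actual SSAE would train them to minimize the loss below.
W_enc = [[random.uniform(-1, 1) for _ in range(D)] for _ in range(K)]
W_dec = [[random.uniform(-1, 1) for _ in range(K)] for _ in range(D)]

def relu(x):
    return x if x > 0.0 else 0.0

def encode(delta):
    # Sparse, non-negative code for an activation *difference*, not a raw embedding.
    return [relu(sum(w * d for w, d in zip(row, delta))) for row in W_enc]

def decode(z):
    # Linear reconstruction of the shift from the sparse code.
    return [sum(W_dec[i][k] * z[k] for k in range(K)) for i in range(D)]

def ssae_loss(delta, lam=0.1):
    # Reconstruction error on the shift plus an L1 penalty encouraging
    # each difference to be explained by few latent dimensions.
    z = encode(delta)
    recon = decode(z)
    mse = sum((r - d) ** 2 for r, d in zip(recon, delta)) / D
    l1 = sum(abs(zi) for zi in z)
    return mse + lam * l1, z

# Hypothetical activations from two prompts that differ in a single concept.
e1 = [0.2, -0.5, 1.0, 0.3]
e2 = [0.9, -0.4, 1.1, -0.2]
delta = [b - a for a, b in zip(e1, e2)]
loss, z = ssae_loss(delta)
```

The key design choice the abstract motivates is visible here: because the model only ever sees shifts `e2 - e1`, concept-irrelevant structure shared by both embeddings cancels out before encoding, which is what makes the sparsity assumption more plausible on differences than on raw activations.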