[2602.14111] Sanity Checks for Sparse Autoencoders: Do SAEs Beat Random Baselines?
Summary
This paper evaluates how well Sparse Autoencoders (SAEs) recover meaningful features from neural networks, and finds that fully trained SAEs perform no better than random baselines on interpretability and downstream tasks.
Why It Matters
Understanding the limitations of Sparse Autoencoders is crucial for researchers and practitioners in machine learning, as it challenges the assumption that these models can effectively interpret neural network features. This insight can guide future research and model development.
Key Takeaways
- On a synthetic benchmark, SAEs recover only 9% of true features despite achieving 71% explained variance.
- SAEs perform similarly to random baselines in interpretability and causal editing tasks.
- Current SAE models may not reliably decompose neural network mechanisms.
arXiv:2602.14111 (cs) [Submitted on 15 Feb 2026]
Title: Sanity Checks for Sparse Autoencoders: Do SAEs Beat Random Baselines?
Authors: Anton Korznikov, Andrey Galichin, Alexey Dontsov, Oleg Rogov, Ivan Oseledets, Elena Tutubalina
Abstract: Sparse Autoencoders (SAEs) have emerged as a promising tool for interpreting neural networks by decomposing their activations into sparse sets of human-interpretable features. Recent work has introduced multiple SAE variants and successfully scaled them to frontier models. Despite much excitement, a growing number of negative results in downstream tasks casts doubt on whether SAEs recover meaningful features. To directly investigate this, we perform two complementary evaluations. On a synthetic setup with known ground-truth features, we demonstrate that SAEs recover only $9\%$ of true features despite achieving $71\%$ explained variance, showing that they fail at their core task even when reconstruction is strong. To evaluate SAEs on real activations, we introduce three baselines that constrain SAE feature directions or their activation patterns to random values. Through extensive experiments across multiple SAE architectures, we show that our baselines match fully-trained SAEs in interpretability (0.87 vs 0.90), sparse probing (0.69 vs 0...
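To make the abstract's random-baseline idea concrete, here is a minimal sketch of one such baseline: a "decoder" of frozen random unit directions, with only a TopK-style sparse activation pattern fit to the data. All names, dimensions, and the top-k/least-squares fitting procedure are illustrative assumptions, not the paper's actual construction; the point is only that a sensible explained-variance number can emerge even when the feature directions carry no learned structure.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_feat, n_tokens, k = 32, 128, 500, 8  # toy sizes (assumed)

# Toy stand-in for model activations.
X = rng.normal(size=(n_tokens, d_model))

# Frozen random unit-norm "feature directions" (the baseline's decoder).
D = rng.normal(size=(n_feat, d_model))
D /= np.linalg.norm(D, axis=1, keepdims=True)

# For each token, keep the k directions with the largest |dot product|
# and least-squares-fit their coefficients (a TopK-style sparse code).
scores = X @ D.T                                          # (n_tokens, n_feat)
support = np.argpartition(-np.abs(scores), k, axis=1)[:, :k]
X_hat = np.empty_like(X)
for t in range(n_tokens):
    Dk = D[support[t]]                                    # (k, d_model)
    coef, *_ = np.linalg.lstsq(Dk.T, X[t], rcond=None)
    X_hat[t] = coef @ Dk

# Fraction of variance explained by the purely random directions.
ev = 1.0 - ((X - X_hat) ** 2).sum() / ((X - X.mean(0)) ** 2).sum()
print(f"explained variance with random directions: {ev:.2f}")
```

Because the least-squares fit can only shrink each token's residual, this baseline is guaranteed a positive explained variance, which is exactly why reconstruction quality alone is a weak sanity check for SAEs.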