[2509.03738] Mechanistic Interpretability with Sparse Autoencoder Neural Operators
Summary
This article introduces Sparse Autoencoder Neural Operators (SAE-NOs), a novel approach in machine learning that enhances interpretability and generalization by operating in infinite-dimensional function spaces.
Why It Matters
The development of SAE-NOs advances mechanistic interpretability: learning concepts in function space yields representations that are more robust and generalize across data resolutions. This matters for applications that require high interpretability and adaptability to varying data distributions, making the work relevant for researchers and practitioners in AI and machine learning.
Key Takeaways
- SAE-NOs extend traditional sparse autoencoders to infinite-dimensional function spaces.
- They improve concept learning and generalization across different data resolutions.
- The functional representation hypothesis enhances interpretability compared to fixed-dimensional models.
- SAE-FNOs demonstrate better efficiency in concept utilization and robustness to distribution shifts.
- The functional parameterization of concepts fundamentally shapes what the model learns about the underlying structure of the data.
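To make the contrast in the takeaways concrete, the following is a minimal, illustrative sketch of a *standard* sparse autoencoder of the kind SAE-NOs generalize: each learned concept is a single scalar activation over a fixed-dimensional input. All dimensions, weights, and function names here are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical sizes: d-dimensional activations, m learned concepts (m > d,
# i.e. an overcomplete dictionary, as is typical for SAEs).
d, m = 8, 32
W_enc = rng.normal(scale=0.1, size=(d, m))  # encoder: activation -> concept scores
W_dec = rng.normal(scale=0.1, size=(m, d))  # decoder: concepts -> reconstruction
b = np.zeros(m)

def sae_forward(x):
    """Standard SAE: each concept is one scalar activation (vector-valued)."""
    z = relu(x @ W_enc + b)  # sparse concept activations, one scalar per concept
    x_hat = z @ W_dec        # reconstruction as a sparse sum of concept directions
    return z, x_hat

x = rng.normal(size=d)
z, x_hat = sae_forward(x)
print(z.shape, x_hat.shape)  # (32,) (8,)
```

Because the encoder and decoder are tied to a fixed input dimension `d`, this model cannot be evaluated at a different data resolution, which is precisely the limitation the functional (neural-operator) formulation removes.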
Computer Science > Machine Learning
arXiv:2509.03738 (cs)
Submitted on 3 Sep 2025 (v1); last revised 23 Feb 2026 (this version, v3)
Title: Mechanistic Interpretability with Sparse Autoencoder Neural Operators
Authors: Bahareh Tolooshams, Ailsa Shen, Anima Anandkumar
Abstract: We introduce sparse autoencoder neural operators (SAE-NOs), a new class of sparse autoencoders that operate directly in infinite-dimensional function spaces. We generalize the linear representation hypothesis to a functional representation hypothesis, enabling concept learning beyond vector-valued representations. Unlike standard SAEs, which employ multi-layer perceptrons (SAE-MLPs) and assign each concept a scalar activation, SAE-NOs extend vector-valued representations to functional ones. We instantiate this framework as SAE Fourier neural operators (SAE-FNOs), parameterizing concepts as integral operators in the Fourier domain. We show that this functional parameterization fundamentally shapes learned concepts, leading to improved stability with respect to sparsity level, robustness to distribution shifts, and generalization across discretizations. We show that SAE-FNO is more efficient in concept utilization across data population and more effective in extracting ...
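The abstract's key idea of parameterizing concepts "as integral operators in the Fourier domain" can be sketched as a spectral filter per concept: weights live on Fourier modes rather than grid points, so the same parameters apply at any sampling resolution. This is a toy sketch of the spectral-convolution idea, not the paper's implementation; the mode count, concept count, and function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: m concepts, each parameterized by complex weights on the
# lowest k_max Fourier modes (an integral operator in the Fourier domain).
m, k_max = 4, 6
W = rng.normal(size=(m, k_max)) + 1j * rng.normal(size=(m, k_max))

def concept_operators(u):
    """Apply each concept's spectral filter to a sampled function u.

    Weights act on Fourier coefficients, so the same parameters work at
    any discretization of the input function.
    """
    n = u.shape[-1]
    u_hat = np.fft.rfft(u, norm="forward")      # function -> Fourier coefficients
    out_hat = np.zeros((m, u_hat.shape[-1]), dtype=complex)
    out_hat[:, :k_max] = W * u_hat[:k_max]      # learned weights on low modes only
    return np.fft.irfft(out_hat, n=n, norm="forward")  # back to function space

# The same operator evaluated on two discretizations of the same function.
for n in (64, 128):
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u = np.sin(2 * np.pi * x)
    print(concept_operators(u).shape)  # (4, 64) then (4, 128)
```

Note how both calls reuse `W` unchanged: this resolution-independence is what the abstract refers to as "generalization across discretizations."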