[2512.07988] HOLE: Homological Observation of Latent Embeddings for Neural Network Interpretability
Computer Science > Machine Learning
arXiv:2512.07988 (cs)
[Submitted on 8 Dec 2025 (v1), last revised 6 Apr 2026 (this version, v3)]

Title: HOLE: Homological Observation of Latent Embeddings for Neural Network Interpretability
Authors: Sudhanva Manjunath Athreya, Paul Rosen

Abstract: Deep learning models have achieved remarkable success across various domains, yet their learned representations and decision-making processes remain largely opaque and hard to interpret. This work introduces HOLE (Homological Observation of Latent Embeddings), a method for analyzing and interpreting discriminative neural networks through persistent homology. HOLE extracts topological features from intermediate activations and presents them using a suite of visualization techniques, including cluster flow diagrams, blob graphs, and heatmap dendrograms. These tools facilitate the examination of representation structure and quality across layers. We evaluate HOLE on a range of discriminative models, focusing on representation quality, interpretability across layers, and robustness to input perturbations and model compression. The results indicate that topological analysis reveals patterns associated with class separation, feature disentanglement, and model robustness, providing a complementary perspective for understanding and...
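To make the core idea concrete, the sketch below shows one plausible way to apply persistent homology to a network's intermediate activations, as the abstract describes. This is not the authors' implementation: the function names, the use of PyTorch forward hooks, PCA for dimensionality reduction, and the ripser library for Vietoris-Rips persistence are all assumptions of this sketch, and HOLE's visualizations (cluster flow diagrams, blob graphs, heatmap dendrograms) are not reproduced here.

# Hedged sketch: persistence diagrams from intermediate activations.
# Assumptions (not from the paper): PyTorch for the model, ripser for
# persistent homology, PCA to keep the Rips computation tractable.

import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA
from ripser import ripser

def layer_activations(model, layer, inputs):
    """Capture the output of `layer` for a batch via a forward hook."""
    captured = []
    handle = layer.register_forward_hook(
        lambda mod, inp, out: captured.append(out.detach().flatten(1))
    )
    with torch.no_grad():
        model(inputs)
    handle.remove()
    return torch.cat(captured).cpu().numpy()

def persistence_summary(acts, maxdim=1, n_components=16):
    """Reduce activations, run Vietoris-Rips persistence, and summarize
    each homology dimension by its total persistence (sum of bar lengths)."""
    if acts.shape[1] > n_components:
        acts = PCA(n_components=n_components).fit_transform(acts)
    dgms = ripser(acts, maxdim=maxdim)["dgms"]
    totals = []
    for dgm in dgms:
        finite = dgm[np.isfinite(dgm[:, 1])]  # drop the infinite H0 bar
        totals.append(float((finite[:, 1] - finite[:, 0]).sum()))
    return dgms, totals

# Toy usage: compare the topology of activations at two layers of an MLP.
if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
    x = torch.randn(200, 32)
    for name, layer in [("hidden", model[0]), ("logits", model[2])]:
        acts = layer_activations(model, layer, x)
        _, totals = persistence_summary(acts)
        print(name, "total persistence per H_k:", totals)

Comparing such per-layer summaries is one simple way to track how representation structure (for example, cluster merging visible in H_0 bars) changes across depth, which is the kind of layer-wise examination the abstract attributes to HOLE.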