[2603.02899] Embedding interpretable $\ell_1$-regression into neural networks for uncovering temporal structure in cell imaging
Computer Science > Machine Learning
arXiv:2603.02899 (cs)
[Submitted on 3 Mar 2026]

Title: Embedding interpretable $\ell_1$-regression into neural networks for uncovering temporal structure in cell imaging

Authors: Fabian Kabus, Maren Hackenberg, Julia Hindel, Thibault Cholvin, Antje Kilias, Thomas Brox, Abhinav Valada, Marlene Bartos, Harald Binder

Abstract: While artificial neural networks excel in unsupervised learning of non-sparse structure, classical statistical regression techniques offer better interpretability, in particular when sparseness is enforced by $\ell_1$ regularization, which enables identification of the factors that drive observed dynamics. We investigate how these two types of approaches can be optimally combined, taking as an example two-photon calcium imaging data from which sparse autoregressive dynamics are to be extracted. We propose embedding a vector autoregressive (VAR) model as an interpretable regression technique into a convolutional autoencoder, which provides dimension reduction for tractable temporal modeling. A skip connection separately addresses non-sparse static spatial information, selectively channeling sparse ...
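The core statistical component described in the abstract, a VAR model with an $\ell_1$ penalty on its coefficient matrix, can be sketched in isolation. The following is a minimal illustration, not the paper's implementation: it fits a sparse VAR(1) model $z_{t+1} \approx A z_t$ to a latent time series via proximal gradient descent (ISTA with soft-thresholding), the standard way to handle an $\ell_1$ penalty. The dimension names, learning rate, and penalty strength are all assumptions for the demo; in the paper this regression would sit inside the autoencoder's latent space and be trained jointly with it.

```python
import numpy as np


def soft_threshold(x, lam):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)


def fit_sparse_var(z, lam=0.05, lr=0.2, n_iter=2000):
    """Fit a VAR(1) model z[t+1] ~ A z[t] with an l1 penalty on A.

    Minimizes 0.5/n * ||Z_next - Z_prev A^T||^2 + lam * ||A||_1
    by proximal gradient descent (ISTA). `lam` and `lr` are
    illustrative values, not settings from the paper.
    """
    x, y = z[:-1], z[1:]                 # (n, d) predictors and targets
    n, d = x.shape
    A = np.zeros((d, d))
    for _ in range(n_iter):
        resid = x @ A.T - y              # (n, d) one-step prediction residuals
        grad = resid.T @ x / n           # gradient of the squared-error term w.r.t. A
        A = soft_threshold(A - lr * grad, lr * lam)
    return A


# Demo on synthetic latent trajectories with a known sparse transition matrix.
rng = np.random.default_rng(0)
A_true = np.array([[0.8, 0.0, 0.0],
                   [0.0, 0.5, 0.3],
                   [0.0, 0.0, 0.6]])
T, d = 2000, 3
z = np.zeros((T, d))
for t in range(T - 1):
    z[t + 1] = A_true @ z[t] + rng.normal(scale=1.0, size=d)

A_hat = fit_sparse_var(z)
# The soft-thresholding step drives coefficients outside the true
# support of A_true toward zero, recovering the sparsity pattern.
```

The soft-thresholding step is what makes the fitted coefficients interpretable: entries of `A_hat` that the data do not support are set exactly (or nearly) to zero, so the remaining nonzeros indicate which latent dimensions drive which dynamics.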