[2505.22785] Navigating the Latent Space Dynamics of Neural Models
Computer Science > Machine Learning
arXiv:2505.22785 (cs)
[Submitted on 28 May 2025 (v1), last revised 25 Mar 2026 (this version, v4)]

Title: Navigating the Latent Space Dynamics of Neural Models
Authors: Marco Fumero, Luca Moschella, Emanuele Rodolà, Francesco Locatello

Abstract: Neural networks transform high-dimensional data into compact, structured representations, often modeled as elements of a lower-dimensional latent space. In this paper, we present an alternative interpretation of neural models as dynamical systems acting on the latent manifold. Specifically, we show that autoencoder models implicitly define a latent vector field on the manifold, derived by iteratively applying the encoding-decoding map, without any additional training. We observe that standard training procedures introduce inductive biases that lead to the emergence of attractor points within this vector field. Drawing on this insight, we propose to leverage the vector field as a representation of the network, providing a novel tool to analyze the properties of the model and the data. This representation enables us to: (i) analyze the generalization and memorization regimes of neural models, even throughout training; (ii) extract prior knowledge encoded in the network's parameters from the attractors, without requiring any input data; (iii) identify out-...
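The vector field the abstract describes can be made concrete. For an autoencoder with encoder E and decoder D, one encode-decode step viewed from latent space is f(z) = E(D(z)), the induced field is v(z) = f(z) - z, and iterating f follows the field toward its attractors (approximate fixed points where v(z) ≈ 0). The sketch below illustrates this idea under our own assumptions, not the paper's: the toy MLP autoencoder, its dimensions, and the convergence tolerance are all hypothetical.

```python
import torch
import torch.nn as nn

# Toy autoencoder; architecture and sizes are illustrative assumptions.
class AutoEncoder(nn.Module):
    def __init__(self, data_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))

@torch.no_grad()
def latent_map(model, z):
    """One encode-decode step seen from latent space: f(z) = E(D(z))."""
    return model.encoder(model.decoder(z))

@torch.no_grad()
def latent_vector_field(model, z):
    """Latent vector field induced by the model: v(z) = f(z) - z."""
    return latent_map(model, z) - z

@torch.no_grad()
def iterate_to_attractors(model, z0, max_iters=1000, tol=1e-5):
    """Follow the field z <- z + v(z) (i.e. z <- f(z)) until it vanishes.

    Converged points are approximate attractors of the latent dynamics;
    seeding with random z0 means no input data is required.
    """
    z = z0.clone()
    for _ in range(max_iters):
        v = latent_vector_field(model, z)
        z = z + v  # one step of the discrete dynamics z_{t+1} = f(z_t)
        if v.norm(dim=-1).max() < tol:
            break
    return z

model = AutoEncoder().eval()
z0 = torch.randn(16, 32)  # random latent seeds, no data needed
attractors = iterate_to_attractors(model, z0)
with torch.no_grad():
    decoded = model.decoder(attractors)  # inspect what the weights encode
```

Note that iteration, not a single reconstruction pass, is what exposes the attractor structure: each update z <- f(z) is one step of the discrete dynamical system z_{t+1} = f(z_t) that the paper interprets as acting on the latent manifold, and decoding the converged points is one way to read out the prior knowledge stored in the weights without any data.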