[2602.15676] Relative Geometry of Neural Forecasters: Linking Accuracy and Alignment in Learned Latent Geometry
Summary
This paper examines how neural networks represent latent geometry when forecasting complex dynamical systems, introducing a framework that links representational alignment with forecasting accuracy.
Why It Matters
Understanding the internal representations of neural networks is crucial for improving their forecasting capabilities. This research provides a foundational framework for comparing different model families, which can enhance the development of more accurate and reliable AI systems in various applications.
Key Takeaways
- Introduces anchor-based, geometry-agnostic relative embeddings for neural networks.
- Demonstrates that alignment generally correlates with forecasting accuracy, yet high accuracy can coexist with low alignment.
- Reveals reproducible structure in how different neural network families internalize dynamical systems.
Computer Science > Machine Learning
arXiv:2602.15676 (cs) [Submitted on 17 Feb 2026]
Authors: Deniz Kucukahmetler, Maximilian Jean Hemmann, Julian Mosig von Aehrenfeld, Maximilian Amthor, Christian Deubel, Nico Scherf, Diaaeldin Taha
Abstract: Neural networks can accurately forecast complex dynamical systems, yet how they internally represent underlying latent geometry remains poorly understood. We study neural forecasters through the lens of representational alignment, introducing anchor-based, geometry-agnostic relative embeddings that remove rotational and scaling ambiguities in latent spaces. Applying this framework across seven canonical dynamical systems, ranging from periodic to chaotic, we reveal reproducible family-level structure: multilayer perceptrons align with other MLPs, recurrent networks with RNNs, while transformers and echo-state networks achieve strong forecasts despite weaker alignment. Alignment generally correlates with forecasting accuracy, yet high accuracy can coexist with low alignment. Relative geometry thus provides a simple, reproducible foundation for comparing how model families internalize and represent dynamical structure.
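The abstract's "anchor-based, geometry-agnostic relative embeddings" can be sketched in code. The paper's exact construction is not given here, so the following is a minimal illustration assuming the common recipe: represent each latent vector by its cosine similarities to a fixed set of anchor latents, which makes the representation invariant to orthogonal rotations and uniform rescalings of the latent space. The function and variable names are illustrative, not from the paper.

```python
import numpy as np

def relative_embedding(latents, anchors):
    """Map absolute latents of shape (n, d) to relative coordinates
    of shape (n, k) via cosine similarity against k anchors (k, d)."""
    z = latents / np.linalg.norm(latents, axis=1, keepdims=True)
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return z @ a.T

# Invariance check: an orthogonal rotation plus a positive rescaling of
# the latent space leaves the relative embedding unchanged, because
# cosine similarity depends only on angles between vectors.
rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 8))                  # mock latent states
A = Z[:5]                                      # pick 5 points as anchors
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))   # random orthogonal matrix
R1 = relative_embedding(Z, A)
R2 = relative_embedding(3.0 * Z @ Q, 3.0 * A @ Q)
print(np.allclose(R1, R2))  # True
```

Because both latents and anchors are transformed together, any rotation cancels via Q Qᵀ = I and any positive scale cancels in the normalization, which is what makes the representation suitable for comparing latent spaces across independently trained models.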