[2602.14024] EIDOS: Latent-Space Predictive Learning for Time Series Foundation Models
Summary
The paper introduces EIDOS, a novel approach to time series modeling that focuses on latent-space predictive learning, enhancing the structure and coherence of latent representations.
Why It Matters
EIDOS addresses a limitation of traditional time series models: pretraining on direct future-value prediction often yields representations that capture surface noise rather than meaningful temporal patterns. By shifting pretraining to latent-space learning, it promises improved performance and reliability in time series forecasting, which is crucial for applications in finance, healthcare, and beyond.
Key Takeaways
- EIDOS shifts pretraining from future value prediction to latent-space predictive learning.
- The model uses a causal Transformer to predict latent representation evolution.
- It integrates latent-space alignment and observational grounding for better performance.
- EIDOS mitigates structural fragmentation in representation space.
- Achieves state-of-the-art results on the GIFT-Eval benchmark.
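The joint objective described above can be sketched as a weighted sum of three terms: latent-space alignment, observational grounding, and forecasting supervision. The loss forms (mean squared error) and the weighting scheme below are illustrative assumptions; the paper does not specify them in the material summarized here.

```python
import numpy as np

def joint_objective(pred_latent, target_latent, recon, x, forecast, y,
                    lam_align=1.0, lam_ground=1.0, lam_fc=1.0):
    """Hypothetical EIDOS-style joint loss (all three terms as MSE).

    pred_latent   : latents predicted by the causal Transformer
    target_latent : stable targets from the aggregation branch
    recon, x      : reconstruction vs. input signal (observational grounding)
    forecast, y   : predicted vs. true future values (forecasting supervision)
    """
    mse = lambda a, b: float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))
    l_align = mse(pred_latent, target_latent)   # latent-space alignment
    l_ground = mse(recon, x)                    # anchor latents to the input
    l_fc = mse(forecast, y)                     # direct forecasting supervision
    return lam_align * l_align + lam_ground * l_ground + lam_fc * l_fc
```

In practice each term would be computed over batched tensors inside the training loop; the scalar weights `lam_*` are placeholders for whatever balancing the authors actually use.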
Computer Science > Machine Learning
arXiv:2602.14024 (cs) [Submitted on 15 Feb 2026]
Title: EIDOS: Latent-Space Predictive Learning for Time Series Foundation Models
Authors: Xinxing Zhou, Qingren Yao, Yiji Zhao, Chenghao Liu, Flora Salim, Xiaojie Yuan, Yanlong Wen, Ming Jin
Abstract: Most time series foundation models are pretrained by directly predicting future observations, which often yields weakly structured latent representations that capture surface noise rather than coherent and predictable temporal dynamics. In this work, we introduce EIDOS, a foundation model family that shifts pretraining from future value prediction to latent-space predictive learning. We train a causal Transformer to predict the evolution of latent representations, encouraging the emergence of structured and temporally coherent latent states. To ensure stable targets for latent-space learning, we design a lightweight aggregation branch to construct target representations. EIDOS is optimized via a joint objective that integrates latent-space alignment, observational grounding to anchor representations to the input signal, and direct forecasting supervision. On the GIFT-Eval benchmark, EIDOS mitigates structural fragmentation in the representation space and achieves state-of-the-art performance. These results demonstrate that cons...
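The abstract mentions a lightweight aggregation branch that constructs stable targets for latent-space learning. The exact aggregation is not specified in the material here; a minimal sketch, assuming a causal mean-pool over a small trailing window of encoder latents (with the targets treated as fixed, i.e. no gradient flows through them):

```python
import numpy as np

def build_latent_targets(latents, window=4):
    """Hypothetical target construction for latent-space predictive learning.

    latents : array of shape (T, d), one latent state per time step.
    Returns an array of shape (T-1, d): the target for the prediction made
    at step t is a mean over a trailing window of latents ending at t+1,
    which smooths the targets and keeps the setup strictly causal.
    """
    T, d = latents.shape
    targets = np.stack([
        latents[max(0, t + 1 - window):t + 1].mean(axis=0)
        for t in range(1, T)
    ])
    return targets  # predicted from latents[:-1] by the causal Transformer
```

In a real implementation the targets would come from the aggregation branch's own forward pass with a stop-gradient, so that the predictive branch chases stable rather than moving targets.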