[2602.17568] Be Wary of Your Time Series Preprocessing
Summary
This paper analyzes the impact of normalization strategies on Transformer-based models for time series representation learning, revealing that the choice of preprocessing can significantly affect model performance.
Why It Matters
Understanding the role of preprocessing in time series modeling is crucial for improving the performance of machine learning models. This research highlights the need for normalization strategies tailored to the task and dataset, which can lead to better outcomes across a range of benchmarks.
Key Takeaways
- Normalization strategies significantly influence model expressivity in time series tasks.
- No single normalization method consistently outperforms others across different tasks.
- Omitting normalization can sometimes yield better performance than applying it.
- A novel expressivity framework is proposed to quantify model capabilities.
- Empirical validation complements theoretical findings, emphasizing the need for task-specific preprocessing.
Computer Science > Machine Learning
arXiv:2602.17568 (cs) [Submitted on 19 Feb 2026]
Title: Be Wary of Your Time Series Preprocessing
Authors: Sofiane Ennadir, Tianze Wang, Oleg Smirnov, Sahar Asadi, Lele Cao
Abstract: Normalization and scaling are fundamental preprocessing steps in time series modeling, yet their role in Transformer-based models remains underexplored from a theoretical perspective. In this work, we present the first formal analysis of how different normalization strategies, specifically instance-based and global scaling, impact the expressivity of Transformer-based architectures for time series representation learning. We propose a novel expressivity framework tailored to time series, which quantifies a model's ability to distinguish between similar and dissimilar inputs in the representation space. Using this framework, we derive theoretical bounds for two widely used normalization methods: Standard and Min-Max scaling. Our analysis reveals that the choice of normalization strategy can significantly influence the model's representational capacity, depending on the task and data characteristics. We complement our theory with empirical validation on classification and forecasting benchmarks using multiple Transformer-based models. Our results show that no single normalization method consistently outperforms others, and ...
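To make the distinction concrete, the sketch below contrasts instance-based and global variants of the two scaling methods the abstract names, Standard and Min-Max. The function names and the toy batch are illustrative assumptions, not from the paper; the example simply shows how instance-based scaling can collapse series that differ only in scale, which is the kind of distinguishability the paper's expressivity framework measures.

```python
import numpy as np

def standard_scale(x, axis=None):
    """Standard (z-score) scaling: zero mean, unit variance.
    axis=-1 -> instance-based (per series); axis=None -> global (whole batch)."""
    keep = axis is not None
    mu = x.mean(axis=axis, keepdims=keep)
    sigma = x.std(axis=axis, keepdims=keep)
    return (x - mu) / (sigma + 1e-8)

def minmax_scale(x, axis=None):
    """Min-Max scaling to [0, 1], instance-based or global as above."""
    keep = axis is not None
    lo = x.min(axis=axis, keepdims=keep)
    hi = x.max(axis=axis, keepdims=keep)
    return (x - lo) / (hi - lo + 1e-8)

# Toy batch: three series that are scalar multiples of one another.
batch = np.array([[1.0, 2.0, 3.0, 4.0],
                  [10.0, 20.0, 30.0, 40.0],
                  [0.1, 0.2, 0.3, 0.4]])

inst = standard_scale(batch, axis=-1)  # per-series: scale info is erased
glob = standard_scale(batch)           # global: scale differences survive
```

After instance-based scaling, all three series map to the same representation (z-scoring is invariant to per-series shift and scale), so a downstream model cannot tell them apart; global scaling preserves the differences. Which behavior is desirable depends on the task, matching the paper's finding that no single strategy dominates.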