[2303.00320] TimeMAE: Self-Supervised Representations of Time Series with Decoupled Masked Autoencoders
Computer Science > Machine Learning
arXiv:2303.00320 (cs)
[Submitted on 1 Mar 2023 (v1), last revised 27 Feb 2026 (this version, v4)]

Title: TimeMAE: Self-Supervised Representations of Time Series with Decoupled Masked Autoencoders
Authors: Mingyue Cheng, Xiaoyu Tao, Zhiding Liu, Qi Liu, Hao Zhang, Rujiao Zhang, Enhong Chen

Abstract: Learning transferable representations from unlabeled time series is crucial for improving performance in data-scarce classification. Existing self-supervised methods often operate at the point level and rely on unidirectional encoding, leading to low semantic density and a mismatch between pre-training and downstream optimization. In this paper, we propose TimeMAE, a self-supervised framework that reformulates masked modeling for time series via semantic unit elevation and decoupled representation learning. Instead of modeling individual time steps, TimeMAE segments time series into non-overlapping sub-series to form semantically enriched units, enabling more informative masked reconstruction while reducing computational cost. To address the representation discrepancy introduced by masking, we design a decoupled masked autoencoder that separately encodes visible and masked regions, avoiding artificial masked tokens in the main encoder. To guide pre-training, we...
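The segmentation-and-masking idea from the abstract can be illustrated with a minimal sketch. This is a hypothetical helper, not the authors' released code: it splits a 1-D series into non-overlapping sub-series ("semantic units") and randomly selects which units to mask, so that visible windows can be routed to the main encoder while masked positions are handled separately, mirroring the decoupled design that keeps artificial mask tokens out of the main encoder. Function and parameter names here are illustrative assumptions.

```python
import numpy as np

def segment_and_mask(series, window_len, mask_ratio, seed=0):
    """Illustrative sketch of masked time-series modeling setup.

    Splits `series` into non-overlapping windows of length `window_len`
    and randomly masks a `mask_ratio` fraction of them. Not the paper's
    implementation; names and defaults are assumptions.
    """
    n_windows = len(series) // window_len
    # Drop any trailing remainder so windows are strictly non-overlapping.
    windows = series[: n_windows * window_len].reshape(n_windows, window_len)

    rng = np.random.default_rng(seed)
    n_masked = int(round(mask_ratio * n_windows))
    masked_idx = rng.choice(n_windows, size=n_masked, replace=False)
    mask = np.zeros(n_windows, dtype=bool)
    mask[masked_idx] = True

    # Visible windows would feed the main encoder; masked windows are
    # encoded/predicted separately (the "decoupled" part of the design).
    return windows[~mask], windows[mask], mask

series = np.arange(32, dtype=float)
visible, masked, mask = segment_and_mask(series, window_len=4, mask_ratio=0.5)
print(visible.shape, masked.shape)  # (4, 4) (4, 4)
```

With a length-32 series and windows of length 4, half of the 8 windows are masked, leaving 4 visible and 4 masked sub-series.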