[2303.00320] TimeMAE: Self-Supervised Representations of Time Series with Decoupled Masked Autoencoders

arXiv - Machine Learning

About this article

Computer Science > Machine Learning
arXiv:2303.00320 (cs)
[Submitted on 1 Mar 2023 (v1), last revised 27 Feb 2026 (this version, v4)]

Title: TimeMAE: Self-Supervised Representations of Time Series with Decoupled Masked Autoencoders
Authors: Mingyue Cheng, Xiaoyu Tao, Zhiding Liu, Qi Liu, Hao Zhang, Rujiao Zhang, Enhong Chen

Abstract: Learning transferable representations from unlabeled time series is crucial for improving performance in data-scarce classification. Existing self-supervised methods often operate at the point level and rely on unidirectional encoding, leading to low semantic density and a mismatch between pre-training and downstream optimization. In this paper, we propose TimeMAE, a self-supervised framework that reformulates masked modeling for time series via semantic unit elevation and decoupled representation learning. Instead of modeling individual time steps, TimeMAE segments time series into non-overlapping sub-series to form semantically enriched units, enabling more informative masked reconstruction while reducing computational cost. To address the representation discrepancy introduced by masking, we design a decoupled masked autoencoder that separately encodes visible and masked regions, avoiding artificial masked tokens in the main encoder. To guide pre-training, we...
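
The abstract describes two concrete steps: slicing the series into non-overlapping sub-series to form "semantically enriched units", and handling the visible and masked units in separate branches so that no artificial mask tokens enter the main encoder. Since only the abstract is available here, the snippet below is a minimal illustrative sketch of that segmentation and visible/masked split; the window length, mask ratio, and all function names are assumptions, not the authors' released code.

```python
# Illustrative sketch only (not the authors' implementation): segment a 1-D time
# series into non-overlapping sub-series and split them into visible/masked sets.
import numpy as np


def segment_series(x: np.ndarray, window: int) -> np.ndarray:
    """Slice a 1-D series of length T into non-overlapping windows of size `window`.

    Returns an array of shape (T // window, window); any trailing remainder is dropped.
    """
    n_units = len(x) // window
    return x[: n_units * window].reshape(n_units, window)


def split_visible_masked(n_units: int, mask_ratio: float, rng: np.random.Generator):
    """Randomly pick which sub-series are masked; return (visible_idx, masked_idx)."""
    n_masked = int(round(n_units * mask_ratio))
    perm = rng.permutation(n_units)
    return np.sort(perm[n_masked:]), np.sort(perm[:n_masked])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    series = rng.standard_normal(512)           # toy series of length 512
    units = segment_series(series, window=8)    # -> (64, 8) sub-series "units"
    visible, masked = split_visible_masked(len(units), mask_ratio=0.6, rng=rng)
    # In the decoupled design described in the abstract, units[visible] would feed
    # the main encoder while units[masked] are handled by a separate branch.
    print(units.shape, visible.shape, masked.shape)
```

Under such a split, the point of the decoupled design as the abstract states it is that the main encoder only ever sees real, visible sub-series, avoiding the pre-training/downstream mismatch that artificial mask tokens introduce.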


