[2603.01348] UTICA: Multi-Objective Self-Distillation Foundation Model Pretraining for Time Series Classification
arXiv:2603.01348 (cs) [Submitted on 2 Mar 2026]

Title: UTICA: Multi-Objective Self-Distillation Foundation Model Pretraining for Time Series Classification
Authors: Yessin Moakher, Youssef Attia El Hili, Vasilii Feofanov

Abstract: Self-supervised foundation models have achieved remarkable success across domains, including time series. However, the potential of non-contrastive methods, a paradigm that has driven significant advances in computer vision, remains underexplored for time series. In this work, we adapt DINOv2-style self-distillation to pretrain a time series foundation model, building on the Mantis tokenizer and transformer encoder architecture as our backbone. Through a student-teacher framework, our method UTICA learns representations that capture both temporal invariance via augmented crops and fine-grained local structure via patch masking. Our approach achieves state-of-the-art classification performance on both the UCR and UEA benchmarks. These results suggest that non-contrastive methods are a promising and complementary pretraining strategy for time series foundation models.

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.01348 [cs.LG] (or arXiv:2603.01348v1 [cs.LG] for this version) https...
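To make the abstract's description concrete, the sketch below illustrates one training step of DINO-style self-distillation on time series under a student-teacher framework: the student encodes a masked, augmented crop, the teacher encodes another unmasked crop, and the teacher is updated as an exponential moving average of the student. This is a minimal illustration, not the authors' implementation: the toy encoder (a stand-in for the Mantis tokenizer and transformer backbone), the crop and masking scheme, and all hyperparameters (temperatures, EMA momentum, projection size) are assumptions for the example.

# Minimal sketch of DINOv2-style self-distillation for time series (illustrative only).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTSEncoder(nn.Module):
    """Toy stand-in for the Mantis tokenizer + transformer encoder backbone."""
    def __init__(self, patch_len=16, dim=128, depth=2, heads=4, n_proto=256):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, n_proto)  # projection to prototype scores

    def forward(self, x, mask=None):
        # x: (batch, length) univariate series; split into non-overlapping patches
        b, length = x.shape
        patches = x[:, : length // self.patch_len * self.patch_len]
        patches = patches.reshape(b, -1, self.patch_len)
        tokens = self.embed(patches)
        if mask is not None:                     # zero out masked patches (student view only)
            tokens = tokens * (~mask).unsqueeze(-1)
        z = self.encoder(tokens).mean(dim=1)     # pooled representation
        return self.head(z)

def dino_loss(student_out, teacher_out, t_s=0.1, t_t=0.04):
    """Cross-entropy between the sharpened teacher and student distributions."""
    teacher = F.softmax(teacher_out / t_t, dim=-1).detach()
    return -(teacher * F.log_softmax(student_out / t_s, dim=-1)).sum(-1).mean()

student = TinyTSEncoder()
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)
opt = torch.optim.AdamW(student.parameters(), lr=1e-4)

x = torch.randn(8, 256)                          # toy batch of series
crop_a, crop_b = x[:, :192], x[:, 64:]           # two overlapping augmented "crops"
mask = torch.rand(8, 192 // 16) < 0.3            # random patch mask for the student

# Student sees a masked crop, teacher sees the other (unmasked) crop.
opt.zero_grad()
loss = dino_loss(student(crop_a, mask=mask), teacher(crop_b))
loss.backward()
opt.step()

# EMA teacher update (the momentum value 0.996 is an assumption).
with torch.no_grad():
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.mul_(0.996).add_(ps, alpha=0.004)

In this reading, the augmented crops drive temporal invariance (the two views must map to similar distributions) while patch masking forces the student to recover fine-grained local structure the teacher sees intact; how the paper combines and weights these objectives is described in the full text.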