Patch Time Series Transformer in Hugging Face
Published February 1, 2024

By Nam Nguyen (namctin), Wesley M. Gifford (wmgifford), Arindam Jati (ajati), Vijay Ekambaram (vijaye12), and Kashif Rasul (kashif)

In this blog, we provide examples of how to get started with PatchTST. We first demonstrate the forecasting capability of PatchTST on the Electricity dataset. We then demonstrate the transfer-learning capability of PatchTST by using the previously trained model to do zero-shot forecasting on the electrical transformer (ETTh1) dataset. The zero-shot forecasting performance denotes the test performance of the model in the target domain, without any training on the target domain. Subsequently, we do linear probing and then finetuning of the pretrained model on the training part of the target data, and validate the forecasting performance on the test part of the target data.

The PatchTST model was proposed in "A Time Series is Worth 64 Words: Long-term Forecasting with Transformers" by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam, and presented at ICLR 2023.

Quick overview of PatchTST

At a high level, the model vectorizes individual time series in a batch into patches of a given size and encodes the resulting sequence of vectors via a Transformer, which then outputs the forecast over the prediction length via an appropriate head. The model is based on two ...
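The patching step described above can be sketched in a few lines. This is an illustrative standalone implementation (the function name, the toy context length of 512, and the patch size/stride values are our own choices, not taken from the blog or the model's internals):

```python
import numpy as np

def make_patches(series: np.ndarray, patch_length: int, stride: int) -> np.ndarray:
    """Split a 1-D series into (possibly overlapping) patches.

    Each patch is a window of `patch_length` consecutive values; successive
    windows start `stride` steps apart. Returns shape (num_patches, patch_length).
    """
    num_patches = (len(series) - patch_length) // stride + 1
    return np.stack(
        [series[i * stride : i * stride + patch_length] for i in range(num_patches)]
    )

# A toy context window of 512 time steps, patched into length-16 vectors with stride 8.
context = np.arange(512, dtype=np.float32)
patches = make_patches(context, patch_length=16, stride=8)
print(patches.shape)  # (63, 16): 63 patch vectors, each of length 16
```

It is this sequence of patch vectors, rather than the raw time steps, that the Transformer encoder attends over, which shortens the attention sequence roughly by a factor of the stride.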
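As a rough sketch of how one might instantiate PatchTST via the transformers library before the walkthrough below: the configuration values here (7 input channels as in ETTh1, context length 512, patch length 16, stride 8, prediction length 96) are illustrative assumptions, not the blog's actual experimental settings.

```python
from transformers import PatchTSTConfig, PatchTSTForPrediction

# Illustrative configuration; the values are assumptions for this sketch.
config = PatchTSTConfig(
    num_input_channels=7,   # e.g. the 7 channels of ETTh1
    context_length=512,     # length of the lookback window
    patch_length=16,        # size of each patch
    patch_stride=8,         # step between consecutive patches
    prediction_length=96,   # forecast horizon
)

# A randomly initialized forecasting model with this configuration.
model = PatchTSTForPrediction(config)
print(config.context_length, config.prediction_length)
```

In the examples that follow, such a model is trained on the source dataset and then reused, probed, or finetuned on the target dataset.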