[2406.14045] LTSM-Bundle: A Toolbox and Benchmark on Large Language Models for Time Series Forecasting
Summary
The LTSM-Bundle introduces a comprehensive toolbox and benchmark for training Large Time Series Models (LTSMs), enhancing time series forecasting through modular design and empirical validation.
Why It Matters
This work addresses the significant challenges in time series forecasting by leveraging the advancements in large language models. By providing a structured approach to training LTSMs, it enhances the capability to handle diverse datasets, which is crucial for industries relying on accurate forecasting.
Key Takeaways
- LTSM-Bundle offers a modular toolbox for training LTSMs.
- It benchmarks various design choices for improved forecasting performance.
- Empirical results show superior performance over traditional methods.
- Focuses on addressing challenges of diverse time series data.
- Combines effective strategies for zero-shot and few-shot learning.
Computer Science > Machine Learning
arXiv:2406.14045 (cs)
[Submitted on 20 Jun 2024 (v1), last revised 13 Feb 2026 (this version, v3)]
Title: LTSM-Bundle: A Toolbox and Benchmark on Large Language Models for Time Series Forecasting
Authors: Yu-Neng Chuang, Songchen Li, Jiayi Yuan, Guanchu Wang, Kwei-Herng Lai, Joshua Han, Zihang Xu, Songyuan Sui, Leisheng Yu, Sirui Ding, Chia-Yuan Chang, Alfredo Costilla Reyes, Daochen Zha, Xia Hu
Abstract: Time Series Forecasting (TSF) has long been a challenge in time series analysis. Inspired by the success of Large Language Models (LLMs), researchers are now developing Large Time Series Models (LTSMs), universal transformer-based models that use autoregressive prediction, to improve TSF. However, training LTSMs on heterogeneous time series data poses unique challenges, including diverse frequencies, dimensions, and patterns across datasets. Recent endeavors have studied and evaluated various design choices aimed at enhancing LTSM training and generalization capabilities, but these design choices are typically studied in isolation rather than benchmarked collectively. In this work, we introduce LTSM-Bundle, a comprehensive toolbox and benchmark for training LTSMs, spanning pre-processing techniques, model configurations, and dataset co...
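The abstract describes LTSMs as transformer-based models that forecast autoregressively: each predicted value is appended to the context window and fed back in to produce the next step. The sketch below illustrates that rollout loop only; `toy_model` is a stand-in 2-lag average, not the LTSM-Bundle API, and all names here are illustrative assumptions.

```python
def toy_model(window):
    """Stand-in for a trained LTSM: predicts the next value from a context window.

    A real LTSM would be a transformer; this naive 2-lag mean is purely
    illustrative of the interface (window in, next value out)."""
    return sum(window[-2:]) / 2.0

def autoregressive_forecast(history, horizon, model=toy_model):
    """Roll the model forward `horizon` steps, feeding each prediction back in."""
    context = list(history)
    predictions = []
    for _ in range(horizon):
        next_value = model(context)
        predictions.append(next_value)
        context.append(next_value)  # prediction becomes part of the next context
    return predictions

print(autoregressive_forecast([1.0, 2.0, 3.0, 4.0], horizon=3))
# → [3.5, 3.75, 3.625]
```

The same loop structure applies regardless of the underlying model, which is why a modular toolbox can swap pre-processing, backbone, and prompting choices while keeping the forecasting interface fixed.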