[2508.07697] Semantic-Enhanced Time-Series Forecasting via Large Language Models
Computer Science > Machine Learning

arXiv:2508.07697 (cs) [Submitted on 11 Aug 2025 (v1), last revised 2 Mar 2026 (this version, v5)]

Title: Semantic-Enhanced Time-Series Forecasting via Large Language Models
Authors: Hao Liu, Xiaoxing Zhang, Chun Yang, Xiaobin Zhu

Abstract: Time series forecasting plays a significant role in finance, energy, meteorology, and IoT applications. Recent studies have leveraged the generalization capabilities of large language models (LLMs) for time series forecasting, achieving promising performance. However, existing work focuses on token-level modality alignment rather than bridging the intrinsic modality gap between linguistic knowledge structures and time-series data patterns, which greatly limits the semantic representation. To address this issue, we propose a novel Semantic-Enhanced LLM (SE-LLM) that exploits the inherent periodicity and anomaly characteristics of time series, embedding them into the semantic space to enrich the token embeddings. This makes the tokens more interpretable to the LLM, thereby activating its potential for temporal sequence analysis. Moreover, existing Transformer-based LLMs excel at capturing long-range dependencies but are weak at modeling short-term anomalies in time-series data. Hence, we propose a plugin module...
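The abstract describes the idea only at a high level; as a rough illustration of what "embedding periodicity and anomaly cues into the token space" could look like, here is a minimal PyTorch sketch. It is not the authors' implementation: all names (SemanticEnhancedEmbedding, top_k, the FFT-based periodicity features, and the convolutional short-term branch) are hypothetical stand-ins for the components the abstract mentions.

```python
# Hypothetical sketch (not the paper's code): enrich per-patch token
# embeddings with periodicity features from the FFT amplitude spectrum,
# plus a small convolutional branch for short-term deviations.
import torch
import torch.nn as nn


class SemanticEnhancedEmbedding(nn.Module):
    """Illustrative module: map time-series patches to LLM-sized tokens,
    augmented with spectral (periodicity) and local-anomaly features."""

    def __init__(self, patch_len: int, d_model: int, top_k: int = 4):
        super().__init__()
        self.top_k = top_k
        self.value_proj = nn.Linear(patch_len, d_model)       # raw patch values
        self.freq_proj = nn.Linear(2 * top_k, d_model)        # dominant frequencies
        self.local_conv = nn.Conv1d(1, d_model, kernel_size=3, padding=1)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, num_patches, patch_len)
        b, n, p = patches.shape

        # Periodicity cue: amplitudes and (normalized) indices of the
        # top-k frequencies in each patch's amplitude spectrum.
        spec = torch.fft.rfft(patches, dim=-1).abs()          # (b, n, p//2+1)
        amp, idx = spec.topk(self.top_k, dim=-1)
        freq_feat = torch.cat([amp, idx.float() / p], dim=-1)

        # Short-term cue: convolution over each patch, pooled per patch,
        # standing in for the short-term anomaly plugin the abstract names.
        local = self.local_conv(patches.reshape(b * n, 1, p))  # (b*n, d, p)
        local = local.mean(dim=-1).reshape(b, n, -1)

        # Fuse all three views into the token embedding fed to the LLM.
        return self.value_proj(patches) + self.freq_proj(freq_feat) + local


# Usage: 32 series, 16 patches of length 24, GPT-2-sized hidden dim.
tokens = SemanticEnhancedEmbedding(patch_len=24, d_model=768)(
    torch.randn(32, 16, 24))
print(tokens.shape)  # torch.Size([32, 16, 768])
```

Additive fusion of the three views is an arbitrary choice here; the paper may fuse its semantic and temporal features differently (e.g., via cross-attention or gating).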