[2503.01013] TimeXL: Explainable Multi-modal Time Series Prediction with LLM-in-the-Loop
Computer Science > Machine Learning

arXiv:2503.01013 (cs)

[Submitted on 2 Mar 2025 (v1), last revised 22 Mar 2026 (this version, v4)]

Title: TimeXL: Explainable Multi-modal Time Series Prediction with LLM-in-the-Loop

Authors: Yushan Jiang, Wenchao Yu, Geon Lee, Dongjin Song, Kijung Shin, Wei Cheng, Yanchi Liu, Haifeng Chen

Abstract: Time series analysis provides essential insights for real-world system dynamics and informs downstream decision-making, yet most existing methods overlook the rich contextual signals present in auxiliary modalities. To bridge this gap, we introduce TimeXL, a multi-modal prediction framework that integrates a prototype-based time series encoder with three collaborating Large Language Models (LLMs) to deliver more accurate predictions and interpretable explanations. First, a multi-modal prototype-based encoder processes both time series and textual inputs to generate preliminary forecasts alongside case-based rationales. These outputs then feed into a prediction LLM, which refines the forecasts by reasoning over the encoder's predictions and explanations. Next, a reflection LLM compares the predicted values against the ground truth, identifying textual inconsistencies or noise. Guided by this feedback, a refinement LLM iteratively enhances text quality and triggers encoder ...
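To make the loop concrete, below is a minimal Python sketch of the pipeline as the abstract describes it: encoder forecast and rationale, prediction LLM refinement, reflection against ground truth, and text refinement feeding the next round. Every name here (multimodal_encoder, call_llm, parse_forecast, max_rounds) is a hypothetical placeholder, not the authors' implementation, and the fixed round count stands in for a stopping criterion the abstract does not specify.

```python
# Hedged sketch of the TimeXL LLM-in-the-loop cycle. All callables are
# user-supplied stand-ins; the paper's actual components differ.
from typing import Callable, List, Tuple

def timexl_loop(
    series: List[float],
    text: str,
    ground_truth: List[float],
    multimodal_encoder: Callable[[List[float], str], Tuple[List[float], str]],
    call_llm: Callable[[str, str], str],   # (role, prompt) -> response text
    parse_forecast: Callable[[str], List[float]],
    max_rounds: int = 3,
) -> List[float]:
    """Run encoder -> prediction LLM -> reflection LLM -> refinement LLM."""
    refined: List[float] = []
    for _ in range(max_rounds):
        # 1. Prototype-based encoder: preliminary forecast + case-based rationale.
        prelim, rationale = multimodal_encoder(series, text)

        # 2. Prediction LLM refines the forecast by reasoning over the
        #    encoder's prediction and its explanation.
        refined = parse_forecast(call_llm(
            "prediction",
            f"Series: {series}\nContext: {text}\n"
            f"Encoder forecast: {prelim}\nRationale: {rationale}\n"
            "Produce a refined forecast.",
        ))

        # 3. Reflection LLM compares prediction and ground truth, flagging
        #    textual inconsistencies or noise.
        feedback = call_llm(
            "reflection",
            f"Predicted: {refined}\nGround truth: {ground_truth}\n"
            f"Context: {text}\nIdentify inconsistencies or noisy passages.",
        )

        # 4. Refinement LLM improves the textual context guided by the
        #    feedback; the cleaner text feeds the next encoder round.
        text = call_llm(
            "refinement",
            f"Original context: {text}\nFeedback: {feedback}\n"
            "Rewrite the context to remove noise and inconsistencies.",
        )
    return refined
```

Passing the encoder and LLM calls in as parameters keeps the sketch agnostic to the actual models; any forecaster and any chat-completion wrapper with a (role, prompt) interface could be slotted in.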