[2602.12756] Closing the Loop: A Control-Theoretic Framework for Provably Stable Time Series Forecasting with LLMs
Summary
This paper introduces F-LLM, a control-theoretic framework for stable time series forecasting using large language models, addressing issues of error propagation in traditional methods.
Why It Matters
As time series forecasting grows in importance across sectors, this research offers an approach that mitigates error accumulation during autoregressive generation, improving the reliability of LLM-based predictions. The findings could benefit industries that depend on accurate long-horizon forecasts.
Key Takeaways
- F-LLM offers a closed-loop framework for time series forecasting.
- The proposed method addresses error propagation inherent in traditional autoregressive models.
- Theoretical guarantees ensure bounded error under specific conditions.
- Extensive experiments show improved performance on time series benchmarks.
- This research could influence future applications of LLMs in forecasting tasks.
Computer Science > Machine Learning
arXiv:2602.12756 (cs) [Submitted on 13 Feb 2026]
Title: Closing the Loop: A Control-Theoretic Framework for Provably Stable Time Series Forecasting with LLMs
Authors: Xingyu Zhang, Hanyun Du, Zeen Song, Jianqi Zhang, Changwen Zheng, Wenwen Qiang
Abstract: Large Language Models (LLMs) have recently shown exceptional potential in time series forecasting, leveraging their inherent sequential reasoning capabilities to model complex temporal dynamics. However, existing approaches typically employ a naive autoregressive generation strategy. We identify a critical theoretical flaw in this paradigm: during inference, the model operates in an open-loop manner, consuming its own generated outputs recursively. This leads to inevitable error accumulation (exposure bias), where minor early deviations cascade into significant trajectory drift over long horizons. In this paper, we reformulate autoregressive forecasting through the lens of control theory, proposing F-LLM (Feedback-driven LLM), a novel closed-loop framework. Unlike standard methods that passively propagate errors, F-LLM actively stabilizes the trajectory via a learnable residual estimator (Observer) and a feedback controller. Furthermore, we provide a theoretical guarantee that our clos...
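The abstract's core contrast, open-loop autoregressive generation that accumulates error versus a feedback correction that keeps it bounded, can be sketched with a toy one-dimensional error recursion. This is an illustrative assumption, not the paper's actual F-LLM components: `b` stands in for a small per-step prediction bias and `k` for a fixed feedback gain, whereas the real framework learns its correction with an observer.

```python
# Toy error dynamics contrasting open-loop and closed-loop forecasting.
# Assumption (not from the paper): each autoregressive step adds a fixed
# bias b to the accumulated prediction error e.

def open_loop_error(b: float, horizon: int) -> float:
    """Open loop: e_{t+1} = e_t + b, so error grows linearly with horizon."""
    e = 0.0
    for _ in range(horizon):
        e += b
    return e

def closed_loop_error(b: float, k: float, horizon: int) -> float:
    """Closed loop: a feedback term -k*e_t contracts the error each step,
    e_{t+1} = (1 - k) * e_t + b, keeping it bounded by b / k for 0 < k < 1."""
    e = 0.0
    for _ in range(horizon):
        e = (1.0 - k) * e + b
    return e

print(open_loop_error(0.1, 50))        # drifts toward 5.0
print(closed_loop_error(0.1, 0.5, 50)) # settles near b / k = 0.2
```

Under the contraction condition |1 - k| < 1 the closed-loop error converges geometrically to b / k regardless of horizon, which mirrors the flavor of the bounded-error guarantee highlighted in the takeaways, though the paper's conditions and proof concern the learned observer-controller pair, not a scalar gain.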