[2602.04369] Multi-scale hypergraph meets LLMs: Aligning large language models for time series analysis
Computer Science > Machine Learning
arXiv:2602.04369 (cs)
[Submitted on 4 Feb 2026 (v1), last revised 2 Mar 2026 (this version, v2)]
Title: Multi-scale hypergraph meets LLMs: Aligning large language models for time series analysis
Authors: Zongjiang Shang, Dongliang Cui, Binqing Wu, Ling Chen
Abstract: Recently, there has been great success in leveraging pre-trained large language models (LLMs) for time series analysis. The core idea lies in effectively aligning the modalities of natural language and time series. However, the multi-scale structures of natural language and time series have not been fully considered, resulting in insufficient utilization of LLMs' capabilities. To this end, we propose MSH-LLM, a Multi-Scale Hypergraph method that aligns Large Language Models for time series analysis. Specifically, a hyperedging mechanism is designed to enrich the multi-scale semantic information of the time series semantic space. Then, a cross-modality alignment (CMA) module is introduced to align natural language and time series at different scales. In addition, a mixture of prompts (MoP) mechanism is introduced to provide contextual information and enhance the ability of LLMs to understand the multi-scale temporal patterns of time series. Experimental results on 27 real...
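The abstract's idea of connecting time series segments across scales via hyperedges can be illustrated with a minimal sketch. This is not the authors' code: the functions `multiscale_patches` and `incidence_matrix`, the mean-pooled patch "embeddings", and the rule that a hyperedge groups all patches falling inside one coarse window are assumptions made purely for illustration of how a multi-scale hypergraph over a series might be laid out.

```python
import numpy as np

def multiscale_patches(series, scales):
    """Split `series` into non-overlapping patches at each scale; each patch
    is summarized by its mean (a toy stand-in for a learned embedding)."""
    patches = []  # list of (scale, start_index, mean_value)
    for s in scales:
        for start in range(0, len(series) - s + 1, s):
            patches.append((s, start, float(np.mean(series[start:start + s]))))
    return patches

def incidence_matrix(patches, coarse):
    """Hyperedge j collects every patch (of any scale) whose start lies in
    the j-th coarse window, giving a patches-by-hyperedges 0/1 matrix."""
    n_edges = max(p[1] // coarse for p in patches) + 1
    H = np.zeros((len(patches), n_edges), dtype=int)
    for i, (_, start, _) in enumerate(patches):
        H[i, start // coarse] = 1
    return H

series = np.sin(np.linspace(0.0, 6.28, 32))
patches = multiscale_patches(series, scales=[4, 8, 16])
H = incidence_matrix(patches, coarse=16)
print(H.shape)  # (14 patches across three scales, 2 hyperedges)
```

Each column of `H` ties together fine- and coarse-scale patches covering the same time span, which is the kind of cross-scale grouping a hypergraph makes possible; in the paper these groupings would feed learned alignment modules rather than a fixed windowing rule.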