[2604.04145] Solar-VLM: Multimodal Vision-Language Models for Augmented Solar Power Forecasting
Computer Science > Artificial Intelligence
arXiv:2604.04145 (cs)
[Submitted on 5 Apr 2026]

Title: Solar-VLM: Multimodal Vision-Language Models for Augmented Solar Power Forecasting
Authors: Hang Fan, Haoran Pei, Runze Liang, Weican Liu, Long Cheng, Wei Wei

Abstract: Photovoltaic (PV) power forecasting plays a critical role in power system dispatch and market participation. Because PV generation is highly sensitive to weather conditions and cloud motion, accurate forecasting requires effective modeling of complex spatiotemporal dependencies across multiple information sources. Although recent studies have advanced AI-based forecasting methods, most fail to fuse temporal observations, satellite imagery, and textual weather information in a unified framework. This paper proposes Solar-VLM, a large-language-model-driven framework for multimodal PV power forecasting. First, modality-specific encoders are developed to extract complementary features from heterogeneous inputs. The time-series encoder adopts a patch-based design to capture temporal patterns from multivariate observations at each site. The visual encoder, built upon a Qwen-based vision backbone, extracts cloud-cover information from satellite images. The text encoder distills historical weather characteristics from textual descriptions. Second, t...
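To make the patch-based time-series encoder mentioned in the abstract concrete, the following is a minimal sketch of how such an encoder is typically built, not the authors' released code: the patch length, embedding size, and the choice of a Transformer encoder over the patch tokens are assumptions for illustration only.

```python
# Hypothetical sketch of a patch-based multivariate time-series encoder.
# All hyperparameters (patch_len, d_model, n_layers) are illustrative assumptions.
import torch
import torch.nn as nn

class PatchTimeSeriesEncoder(nn.Module):
    def __init__(self, n_vars: int, patch_len: int = 16, d_model: int = 128, n_layers: int = 2):
        super().__init__()
        self.patch_len = patch_len
        # Each patch of `patch_len` time steps across all variables becomes one token.
        self.proj = nn.Linear(n_vars * patch_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_vars); time is assumed divisible by patch_len.
        b, t, v = x.shape
        patches = x.reshape(b, t // self.patch_len, self.patch_len * v)
        tokens = self.proj(patches)   # (batch, n_patches, d_model)
        return self.encoder(tokens)   # patch-level temporal features

# Example: 96 past steps of 6 site variables -> 6 patch tokens of dimension 128.
feats = PatchTimeSeriesEncoder(n_vars=6)(torch.randn(2, 96, 6))
print(feats.shape)  # torch.Size([2, 6, 128])
```

In such a design, the resulting patch tokens would be the time-series features that are later fused with the outputs of the vision and text encoders; the fusion mechanism itself is not detailed in the visible portion of the abstract.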