[2603.22372] Rethinking Multimodal Fusion for Time Series: Auxiliary Modalities Need Constrained Fusion
Computer Science > Machine Learning
arXiv:2603.22372 (cs)
[Submitted on 23 Mar 2026]

Title: Rethinking Multimodal Fusion for Time Series: Auxiliary Modalities Need Constrained Fusion

Authors: Seunghan Lee, Jun Seo, Jaehoon Lee, Sungdong Yoo, Minjae Kim, Tae Yoon Lim, Dongwan Kang, Hwanil Choi, SoonYoung Lee, Wonbin Ahn

Abstract: Recent advances in multimodal learning have motivated the integration of auxiliary modalities such as text or vision into time series (TS) forecasting. However, most existing methods provide limited gains, often improving performance only on specific datasets or relying on architecture-specific designs that limit generalization. In this paper, we show that multimodal models with naive fusion strategies (e.g., simple addition or concatenation) often underperform unimodal TS models, which we attribute to uncontrolled integration of auxiliary modalities that can introduce irrelevant information. Motivated by this observation, we explore various constrained fusion methods designed to control such integration and find that they consistently outperform naive fusion. Furthermore, we propose Controlled Fusion Adapter (CFA), a simple plug-in method that enables controlled cross-modal interactions without modifying the TS backbone, integrating only relevant textual information aligne...
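The abstract contrasts naive fusion (unconstrained addition or concatenation of auxiliary features) with constrained fusion that gates how much auxiliary information enters the TS pathway. As a rough illustration of that distinction only, here is a minimal PyTorch sketch; the module names, shapes, and the sigmoid-gate design are assumptions for exposition, not the paper's CFA implementation (which is not reproduced on this page).

    # Sketch contrasting naive vs. gated ("constrained") fusion of an
    # auxiliary text embedding into a time series (TS) embedding.
    # All names, shapes, and the gating design are illustrative assumptions;
    # this is NOT the paper's CFA.
    import torch
    import torch.nn as nn

    class NaiveAdditionFusion(nn.Module):
        """Naive fusion: unconstrained addition of text features to TS features."""
        def __init__(self, dim: int):
            super().__init__()
            self.proj = nn.Linear(dim, dim)  # map text features into the TS space

        def forward(self, ts_feat: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
            # Everything in the auxiliary modality flows in, relevant or not.
            return ts_feat + self.proj(text_feat)

    class GatedFusionAdapter(nn.Module):
        """Constrained fusion: a learned per-feature gate limits how much
        auxiliary information enters; a gate near 0 leaves the TS pathway intact."""
        def __init__(self, dim: int):
            super().__init__()
            self.proj = nn.Linear(dim, dim)
            self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

        def forward(self, ts_feat: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
            aux = self.proj(text_feat)
            g = self.gate(torch.cat([ts_feat, aux], dim=-1))  # gate values in (0, 1)
            return ts_feat + g * aux  # g ~ 0 recovers the unimodal TS model

    if __name__ == "__main__":
        ts = torch.randn(8, 64)   # batch of TS embeddings
        txt = torch.randn(8, 64)  # batch of text embeddings
        print(NaiveAdditionFusion(64)(ts, txt).shape)  # torch.Size([8, 64])
        print(GatedFusionAdapter(64)(ts, txt).shape)   # torch.Size([8, 64])

Note that the gated variant is additive around the frozen TS representation, which is consistent with the abstract's description of a plug-in that works "without modifying the TS backbone"; how CFA actually selects relevant textual information is detailed in the paper itself.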