[2602.19585] Tri-Subspaces Disentanglement for Multimodal Sentiment Analysis
Summary
The paper presents a Tri-Subspace Disentanglement (TSD) framework for Multimodal Sentiment Analysis, enhancing representation by factoring features into three complementary subspaces.
Why It Matters
This research addresses a limitation of existing multimodal sentiment analysis methods, which model either globally shared representations or modality-specific features but miss signals shared by only certain modality pairs. By explicitly capturing these pairwise cross-modal signals alongside global and private ones, the framework improves the expressiveness and discriminative power of sentiment representations and could lead to better sentiment recognition across various platforms.
Key Takeaways
- Introduces Tri-Subspace Disentanglement (TSD) for sentiment analysis.
- Enhances multimodal representation by factoring features into three subspaces.
- Achieves state-of-the-art performance on CMU-MOSI and CMU-MOSEI datasets.
- Utilizes Subspace-Aware Cross-Attention (SACA) for better integration of information.
- Demonstrates effectiveness in multimodal intent recognition tasks.
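The factorization described above can be illustrated with a minimal sketch. This is not the paper's implementation: the linear projectors, dimensions, and cosine-based orthogonality penalty are all illustrative assumptions standing in for the paper's decoupling supervisor and structured regularization losses. The idea shown is that each modality's feature is projected into a common subspace, pairwise-shared subspaces, and a private subspace, with a penalty that keeps the private component independent of the shared ones.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 16, 8  # input feature dim and subspace dim (illustrative sizes)

modalities = ["language", "visual", "acoustic"]
# Pairwise-shared subspaces each modality participates in (lv, la, va).
pairs = {"language": ["lv", "la"], "visual": ["lv", "va"], "acoustic": ["la", "va"]}

def make_proj():
    # One linear projector per (modality, subspace); random init for the sketch.
    return rng.standard_normal((d, k)) / np.sqrt(d)

projectors = {
    m: {"common": make_proj(),
        "private": make_proj(),
        **{p: make_proj() for p in pairs[m]}}
    for m in modalities
}

def factorize(x, m):
    """Project a modality feature x into its subspace components."""
    return {name: x @ W for name, W in projectors[m].items()}

def orthogonality_loss(components):
    """Penalize overlap between the private component and the shared ones
    (squared cosine similarity), keeping the subspaces 'pure'."""
    priv = components["private"]
    loss = 0.0
    for name, z in components.items():
        if name == "private":
            continue
        cos = priv @ z / (np.linalg.norm(priv) * np.linalg.norm(z) + 1e-8)
        loss += cos ** 2
    return loss

x_l = rng.standard_normal(d)          # a language feature vector
comps = factorize(x_l, "language")
print(sorted(comps.keys()))           # ['common', 'la', 'lv', 'private']
print(orthogonality_loss(comps) >= 0.0)  # True
```

In training, this penalty would be minimized jointly with the task loss so that each subspace carries non-redundant information.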
Computer Science > Multimedia
arXiv:2602.19585 (cs)
[Submitted on 23 Feb 2026]
Title: Tri-Subspaces Disentanglement for Multimodal Sentiment Analysis
Authors: Chunlei Meng, Jiabin Luo, Zhenglin Yan, Zhenyu Yu, Rong Fu, Zhongxue Gan, Chun Ouyang
Abstract: Multimodal Sentiment Analysis (MSA) integrates language, visual, and acoustic modalities to infer human sentiment. Most existing methods either focus on globally shared representations or modality-specific features, while overlooking signals that are shared only by certain modality pairs. This limits the expressiveness and discriminative power of multimodal representations. To address this limitation, we propose a Tri-Subspace Disentanglement (TSD) framework that explicitly factorizes features into three complementary subspaces: a common subspace capturing global consistency, submodally-shared subspaces modeling pairwise cross-modal synergies, and private subspaces preserving modality-specific cues. To keep these subspaces pure and independent, we introduce a decoupling supervisor together with structured regularization losses. We further design a Subspace-Aware Cross-Attention (SACA) fusion module that adaptively models and integrates information from the three subspaces to obtain richer and more robust representations. Experiments on CMU-MOSI and CMU-MOSEI demonstrate...
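The SACA fusion module described in the abstract can be sketched as standard scaled dot-product cross-attention over subspace tokens. The token layout, dimensions, and single-head formulation below are assumptions for illustration; the paper's actual module may differ. What the sketch shows is the core mechanism: each subspace's representation adaptively attends to, and aggregates information from, the other subspaces.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 8       # subspace feature dim (illustrative)
n_sub = 4   # one token each for: common, two pairwise-shared, private

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_tokens, kv_tokens, Wq, Wk, Wv):
    """Scaled dot-product cross-attention: each query token adaptively
    aggregates information from the key/value tokens."""
    Q, K, V = q_tokens @ Wq, kv_tokens @ Wk, kv_tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(K.shape[-1]), axis=-1)
    return attn @ V, attn

Wq, Wk, Wv = (rng.standard_normal((k, k)) / np.sqrt(k) for _ in range(3))
subspace_tokens = rng.standard_normal((n_sub, k))  # one token per subspace

fused, attn = cross_attention(subspace_tokens, subspace_tokens, Wq, Wk, Wv)
print(fused.shape)                        # (4, 8)
print(np.allclose(attn.sum(axis=1), 1.0)) # True: each row is a distribution
```

The attention matrix here is what makes the fusion "subspace-aware": its rows weight how much each subspace contributes to the fused representation, rather than fixing the mixing weights in advance.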