[2603.04772] TSEmbed: Unlocking Task Scaling in Universal Multimodal Embeddings
Computer Science > Computation and Language
arXiv:2603.04772 (cs) [Submitted on 5 Mar 2026]

Title: TSEmbed: Unlocking Task Scaling in Universal Multimodal Embeddings
Authors: Yebo Wu, Feng Liu, Ziwei Xie, Zhiyuan Liu, Changwang Zhang, Jun Wang, Li Li

Abstract: Despite the exceptional reasoning capabilities of Multimodal Large Language Models (MLLMs), their adaptation into universal embedding models is significantly impeded by task conflict. To address this, we propose TSEmbed, a universal multimodal embedding framework that synergizes Mixture-of-Experts (MoE) with Low-Rank Adaptation (LoRA) to explicitly disentangle conflicting task objectives. Moreover, we introduce Expert-Aware Negative Sampling (EANS), a novel strategy that leverages expert routing distributions as an intrinsic proxy for semantic similarity. By dynamically prioritizing informative hard negatives that share expert activation patterns with the query, EANS effectively sharpens the model's discriminative power and refines embedding boundaries. To ensure training stability, we further devise a two-stage learning paradigm that solidifies expert specialization before optimizing representations via EANS. TSEmbed achieves state-of-the-art performance on both the Massive Multimodal Embedding Benchmark (MMEB) and real-world industrial production datasets, layin...
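The abstract does not detail how MoE and LoRA are combined. A minimal NumPy sketch of one common way to do it, assuming each "expert" is a low-rank (A, B) adapter pair over a frozen base weight and a learned router mixes the expert deltas per input (all class and variable names here are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class MoELoRALayer:
    """Sketch: frozen base weight W plus num_experts low-rank adapters.
    A router produces soft gates per input, so conflicting tasks can
    route to (and specialize) different experts."""
    def __init__(self, d_in, d_out, num_experts=4, rank=8):
        self.W = rng.standard_normal((d_in, d_out)) * 0.02   # frozen base
        self.A = rng.standard_normal((num_experts, d_in, rank)) * 0.02
        self.B = np.zeros((num_experts, rank, d_out))        # LoRA init: B = 0
        self.router = rng.standard_normal((d_in, num_experts)) * 0.02

    def forward(self, x):
        # x: (batch, d_in)
        gates = softmax(x @ self.router)                     # (batch, num_experts)
        base = x @ self.W
        # each expert's low-rank update: (x @ A_e) @ B_e
        deltas = np.einsum('bi,eir,ero->beo', x, self.A, self.B)
        return base + np.einsum('be,beo->bo', gates, deltas)

x = rng.standard_normal((3, 6))
layer = MoELoRALayer(6, 5)
out = layer.forward(x)
```

With the standard LoRA initialization (B = 0), the layer initially reproduces the frozen base output exactly, which is one reason this combination trains stably.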
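EANS is described only at a high level: rank candidate negatives by how closely their expert routing distribution matches the query's. A toy sketch of that idea, assuming cosine similarity between softmax routing distributions as the proxy (the paper may use a different similarity measure; `eans_select` and all names are illustrative):

```python
import numpy as np

def routing_distribution(logits):
    # softmax over expert logits -> probability of routing to each expert
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def eans_select(query_logits, negative_logits, k=2):
    """Rank candidate negatives by similarity of their expert routing
    distribution to the query's, and return the top-k indices: the
    'hard' negatives that share expert activation patterns."""
    q = routing_distribution(query_logits)            # (num_experts,)
    negs = routing_distribution(negative_logits)      # (num_neg, num_experts)
    sims = negs @ q / (np.linalg.norm(negs, axis=1) * np.linalg.norm(q) + 1e-9)
    return np.argsort(-sims)[:k]

# toy example: 4 experts, 3 candidate negatives
query = np.array([4.0, 0.1, 0.1, 0.1])        # routes mostly to expert 0
negatives = np.array([
    [3.5, 0.2, 0.1, 0.1],                     # same dominant expert -> hard
    [0.1, 4.0, 0.1, 0.1],                     # different expert -> easy
    [3.8, 0.1, 0.2, 0.1],                     # same dominant expert -> hard
])
hard = eans_select(query, negatives, k=2)     # picks candidates 0 and 2
```

The appeal of this proxy is that routing distributions are computed anyway during the MoE forward pass, so hard-negative mining adds no extra encoder cost.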