[2602.22938] pMoE: Prompting Diverse Experts Together Wins More in Visual Adaptation
Summary
The paper presents pMoE, a novel Mixture-of-Experts prompt tuning method that enhances visual adaptation by integrating diverse domain knowledge through expert-specific prompt tokens.
Why It Matters
This research addresses the limitations of traditional prompt tuning methods by leveraging multiple expert domains, which can significantly improve model performance in visual tasks. The findings are relevant for advancing machine learning applications in both general and specialized fields, such as medical imaging.
Key Takeaways
- pMoE enhances visual adaptation by integrating diverse expert knowledge.
- The method introduces expert-specific prompt tokens and a dynamic dispatching mechanism.
- Extensive experiments show significant performance improvements across 47 tasks.
- pMoE balances computational efficiency with adaptation effectiveness.
- The approach is applicable to both general and medical domains.
Computer Science > Computer Vision and Pattern Recognition
arXiv:2602.22938 (cs)
[Submitted on 26 Feb 2026]
Title: pMoE: Prompting Diverse Experts Together Wins More in Visual Adaptation
Authors: Shentong Mo, Xufang Luo, Dongsheng Li
Abstract: Parameter-efficient fine-tuning has demonstrated promising results across various visual adaptation tasks, such as classification and segmentation. Typically, prompt tuning techniques harness knowledge from a single pre-trained model, whether from a general or a specialized medical domain. However, this approach overlooks the potential synergies that could arise from integrating diverse domain knowledge within the same tuning process. In this work, we propose a novel Mixture-of-Experts prompt tuning method called pMoE, which leverages the strengths of multiple expert domains through expert-specialized prompt tokens and a learnable dispatcher, effectively combining their expertise in a unified model framework. Our pMoE introduces expert-specific prompt tokens and utilizes a dynamic token dispatching mechanism at various prompt layers to optimize the contribution of each domain expert during the adaptation phase. By incorporating domain knowledge from diverse experts, the proposed pMoE significantly enhances the model's versatility and applicability ...
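The abstract describes two ingredients: expert-specific prompt tokens and a learnable dispatcher that weights each expert's contribution per input. The paper does not give implementation details here, so the following is only a minimal NumPy sketch of one plausible reading: a dispatcher gates two sets of expert prompt tokens by an input-dependent softmax and prepends the mixture to the patch sequence. All dimensions, names (`W_dispatch`, `dispatch`), and the mean-pooled gating signal are illustrative assumptions, not the authors' design.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: 2 experts (e.g. general + medical), 4 prompt tokens
# per expert, embedding dim 16, 8 patch tokens in the input sequence.
num_experts, num_prompts, dim, num_patches = 2, 4, 16, 8

# Expert-specific prompt tokens (learnable parameters in the real method).
expert_prompts = rng.normal(size=(num_experts, num_prompts, dim))

# Dispatcher weights: map a summary of the input to per-expert gate logits.
W_dispatch = rng.normal(size=(dim, num_experts))

def dispatch(patch_tokens):
    """Gate each expert's prompt tokens by input-dependent weights and
    prepend the combined prompts to the patch sequence."""
    ctx = patch_tokens.mean(axis=0)            # (dim,) mean-pooled input summary
    gates = softmax(ctx @ W_dispatch)          # (num_experts,) mixture weights
    # Weighted sum over experts -> (num_prompts, dim) combined prompt tokens.
    combined = np.tensordot(gates, expert_prompts, axes=1)
    return np.concatenate([combined, patch_tokens], axis=0)

patches = rng.normal(size=(num_patches, dim))
seq = dispatch(patches)
print(seq.shape)  # (12, 16): 4 dispatched prompt tokens + 8 patch tokens
```

In a full model this dispatching would repeat at several prompt layers, as the abstract notes, so different experts can dominate at different depths.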