[2603.29467] M-MiniGPT4: Multilingual VLLM Alignment via Translated Data
Computer Science > Computation and Language

arXiv:2603.29467 (cs)
[Submitted on 31 Mar 2026]

Title: M-MiniGPT4: Multilingual VLLM Alignment via Translated Data
Authors: Seung Hun Han, Youssef Mohamed, Mohamed Elhoseiny

Abstract: This paper presents a multilingual vision large language model, named M-MiniGPT4, which exhibits strong vision-language understanding (VLU) capabilities across 11 languages. We use a mixture of native multilingual and translated data to improve the multilingual VLU performance of the MiniGPT4 architecture. In addition, we propose a multilingual alignment training stage that uses parallel text corpora to further enhance the multilingual capabilities of our model. M-MiniGPT4 achieves 36% accuracy on the multilingual MMMU benchmark, outperforming state-of-the-art models in the same weight class, including foundation models released after the majority of this work was completed. We open-source our models, code, and translated datasets to facilitate future research in low-resource and multilingual settings.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.29467 [cs.CL] (or arXiv:2603.29467v1 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.29467 (arXiv-issued DOI via DataCite, pending registration)