[2508.11999] MOON: Generative MLLM-based Multimodal Representation Learning for E-commerce Product Understanding
Computer Science > Computer Vision and Pattern Recognition
arXiv:2508.11999 (cs)
[Submitted on 16 Aug 2025 (v1), last revised 28 Feb 2026 (this version, v5)]

Title: MOON: Generative MLLM-based Multimodal Representation Learning for E-commerce Product Understanding
Authors: Daoze Zhang, Chenghan Fu, Zhanheng Nie, Jianyu Liu, Wanxian Guan, Yuan Gao, Jun Song, Pengjie Wang, Jian Xu, Bo Zheng

Abstract: With the rapid advancement of e-commerce, exploring general representations rather than task-specific ones has attracted increasing research attention. For product understanding, although existing discriminative dual-flow architectures drive progress in this field, they inherently struggle to model the many-to-one alignment between multiple images and texts of products. Therefore, we argue that generative Multimodal Large Language Models (MLLMs) hold significant potential for improving product representation learning. Nevertheless, achieving this goal still remains non-trivial due to several key challenges: the lack of multimodal and aspect-aware modeling modules in typical LLMs; the common presence of background noise in product images; and the absence of a standard benchmark for evaluation. To address these issues, we propose the first generative MLLM-based model named MOON for product r...
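To make the many-to-one alignment issue concrete, the toy sketch below (not the paper's method; all embeddings are synthetic and the pooling strategy is an assumption for illustration) contrasts how a dual-flow encoder scores each product image against the text one pair at a time, versus a naive workaround that pools several image embeddings into one vector before comparing with the single text embedding:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    """L2-normalize along the last axis so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Hypothetical embeddings: one product with 3 images and 1 title, dim 8.
image_embs = normalize(rng.normal(size=(3, 8)))
text_emb = normalize(rng.normal(size=(8,)))

# Dual-flow view: one-to-one scores, each image compared with the text separately.
pairwise_scores = image_embs @ text_emb          # shape (3,)

# Naive many-to-one view: mean-pool the image embeddings, renormalize,
# then compute a single product-level similarity with the text.
pooled = normalize(image_embs.mean(axis=0))
many_to_one_score = float(pooled @ text_emb)

print(pairwise_scores.shape)
print(-1.0 <= many_to_one_score <= 1.0)
```

Mean pooling is the simplest possible aggregator and discards which image supports which textual aspect; the abstract's point is that a generative MLLM can model this many-to-one relationship directly rather than through such post-hoc pooling.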