[2510.14949] DialectGen: Benchmarking and Improving Dialect Robustness in Multimodal Generation
Computer Science > Computation and Language
arXiv:2510.14949 (cs)
[Submitted on 16 Oct 2025 (v1), last revised 7 Apr 2026 (this version, v3)]

Title: DialectGen: Benchmarking and Improving Dialect Robustness in Multimodal Generation
Authors: Yu Zhou, Sohyun An, Haikang Deng, Da Yin, Clark Peng, Cho-Jui Hsieh, Kai-Wei Chang, Nanyun Peng

Abstract: Contact languages like English exhibit rich regional variations in the form of dialects, which dialect speakers often use when interacting with generative models. However, can multimodal generative models effectively produce content given dialectal textual input? In this work, we study this question by constructing a new large-scale benchmark spanning six common English dialects. We work with dialect speakers to collect and verify over 4200 unique prompts, and we evaluate 17 image and video generative models. Our automatic and human evaluation results show that current state-of-the-art multimodal generative models exhibit 32.26% to 48.17% performance degradation when a single dialect word is used in the prompt. Common mitigation methods such as fine-tuning and prompt rewriting improve dialect performance only by small margins (< 7%), while potentially incurring significant performance degradation in Standard American English (SAE). To this end, we design a g...
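
To make the degradation figures above concrete, here is a minimal sketch of how a relative performance drop between matched SAE and dialect prompts could be computed. The paper's actual scoring function is not shown on this page, so the score values and names below (score_sae, score_dialect) are hypothetical placeholders, assuming higher scores are better (e.g., an automatic image-text alignment score):

    def relative_degradation(score_sae: float, score_dialect: float) -> float:
        """Relative drop (%) when a dialect prompt replaces its SAE equivalent."""
        return (score_sae - score_dialect) / score_sae * 100.0

    # Hypothetical paired scores for one model on matched SAE/dialect prompts.
    sae_scores = [0.82, 0.79, 0.88]
    dialect_scores = [0.51, 0.48, 0.55]

    drops = [relative_degradation(s, d) for s, d in zip(sae_scores, dialect_scores)]
    print(f"mean degradation: {sum(drops) / len(drops):.2f}%")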