[2603.27942] JaWildText: A Benchmark for Vision-Language Models on Japanese Scene Text Understanding
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.27942 (cs)
[Submitted on 30 Mar 2026]

Title: JaWildText: A Benchmark for Vision-Language Models on Japanese Scene Text Understanding
Authors: Koki Maeda (1 and 2), Naoaki Okazaki (1 and 2) ((1) Institute of Science Tokyo, Tokyo, Japan; (2) Research and Development Center for Large Language Models, National Institute of Informatics, Tokyo, Japan)

Abstract: Japanese scene text poses challenges that multilingual benchmarks often fail to capture, including mixed scripts, frequent vertical writing, and a character inventory far larger than the Latin alphabet. Although Japanese appears in several multilingual benchmarks, these resources do not adequately capture its language-specific complexities. Meanwhile, existing Japanese visual text datasets have focused primarily on scanned documents, leaving in-the-wild scene text underexplored. To fill this gap, we introduce JaWildText, a diagnostic benchmark for evaluating vision-language models (VLMs) on Japanese scene text understanding. JaWildText contains 3,241 instances from 2,961 images newly captured in Japan, with 1.12 million annotated characters spanning 3,643 unique character types. It comprises three complementary tasks that vary in visual organization, output format […]