Introducing ConTextual: How well can your Multimodal model jointly reason over text and image in text-rich scenes?
Published March 5, 2024

Authors: Rohan Wadhawan (rohan598), Hritik Bansal (hbXNov), Kai-Wei Chang (kaiweichang), Nanyun (Violet) Peng (violetpeng), Clémentine Fourrier (clefourrier)

Models are becoming quite good at understanding text on its own, but what about text in images, which provides important contextual information? For example, navigating a map, or understanding a meme? The ability to reason about the interactions between text and visual context in images can power many real-world applications, such as AI assistants, or tools to assist the visually impaired. We refer to these tasks as "context-sensitive text-rich visual reasoning tasks".

At the moment, most evaluations of instruction-tuned large multimodal models (LMMs) focus on testing how well models can respond to human instructions posed as questions or imperative sentences ("Count this", "List that", etc.) over images... but not on how well they understand context-sensitive text-rich scenes!

That's why we (researchers from the University of California, Los Angeles) created ConTextual, a Context-sensitive Text-rich visuaL reasoning dataset for evaluating LMMs. We also released a leaderboard, so that the community can see for themselves which models are the best at this task.

For an in-depth dive, you can also check these addit...