[2511.00810] GUI-AIMA: Aligning Intrinsic Multimodal Attention with a Context Anchor for GUI Grounding
Computer Science > Computer Vision and Pattern Recognition

arXiv:2511.00810 (cs)

[Submitted on 2 Nov 2025 (v1), last revised 27 Mar 2026 (this version, v3)]

Title: GUI-AIMA: Aligning Intrinsic Multimodal Attention with a Context Anchor for GUI Grounding

Authors: Shijie Zhou, Viet Dac Lai, Hao Tan, Jihyung Kil, Wanrong Zhu, Changyou Chen, Ruiyi Zhang

Abstract: Graphical user interface (GUI) grounding is a key capability for computer-use agents, mapping natural-language instructions to actionable regions on the screen. Existing Multimodal Large Language Model (MLLM) approaches typically formulate GUI grounding as a text-based coordinate-generation task. However, directly generating precise coordinates from visual inputs is challenging and often data-intensive. A more intuitive strategy is to first identify instruction-relevant visual patches and then determine the exact click location within them. Motivated by recent observations that general MLLMs exhibit native grounding ability embedded in their attention maps, we propose GUI-AIMA, an attention-based and coordinate-free supervised fine-tuning framework for efficient GUI grounding. GUI-AIMA aligns the intrinsic multimodal attention of MLLMs with patch-wise grounding signals. These signals are calculated adaptively for diverse user instructions...
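
To make the patch-first intuition concrete, the following is a minimal, hypothetical sketch of how instruction-conditioned attention over image patches could be turned into a click point: attention weights are averaged over the instruction tokens to score each patch, and the centre of the highest-scoring patch is returned as the click location. The function name, tensor shapes, mean-pooling aggregation, and patch size are illustrative assumptions for exposition only, not the paper's actual method (which supervises the model's intrinsic attention against adaptively computed patch-wise grounding signals).

import numpy as np

def patch_attention_to_click(attn, grid_w, patch_size=28):
    """
    attn: array of shape (num_instruction_tokens, num_image_patches),
          attention weights extracted from some decoder layer/head
          (hypothetical extraction point).
    grid_w: number of patches per row of the image grid.
    Returns (x, y) pixel coordinates of the centre of the highest-scoring patch.
    """
    # Aggregate attention over instruction tokens to get one score per patch.
    patch_scores = attn.mean(axis=0)                  # (num_image_patches,)
    patch_scores = patch_scores / patch_scores.sum()  # normalise to a distribution

    # Pick the most instruction-relevant patch (coordinate-free selection).
    best = int(patch_scores.argmax())
    row, col = divmod(best, grid_w)

    # Map the patch index back to a pixel location (patch centre).
    x = (col + 0.5) * patch_size
    y = (row + 0.5) * patch_size
    return x, y

# Toy usage: a 4x3 patch grid, 5 instruction tokens, random attention weights.
rng = np.random.default_rng(0)
attn = rng.random((5, 12))
print(patch_attention_to_click(attn, grid_w=4))

In this sketch the click point is simply the argmax patch centre; the abstract's coordinate-free framing suggests the exact click location is refined within the selected patch, which this toy example does not attempt.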