[2509.13615] See, Think, Act: Teaching Multimodal Agents to Effectively Interact with GUI by Identifying Toggles
Computer Science > Artificial Intelligence
arXiv:2509.13615 (cs)
[Submitted on 17 Sep 2025 (v1), last revised 28 Feb 2026 (this version, v2)]

Title: See, Think, Act: Teaching Multimodal Agents to Effectively Interact with GUI by Identifying Toggles
Authors: Zongru Wu, Rui Mao, Zhiyuan Tian, Pengzhou Cheng, Tianjie Ju, Zheng Wu, Lingzhong Dong, Haiyue Sheng, Zhuosheng Zhang, Gongshen Liu

Abstract: The advent of multimodal agents facilitates effective interaction within graphical user interfaces (GUIs), especially in ubiquitous GUI control. However, their inability to reliably execute toggle control instructions remains a key bottleneck. To investigate this, we construct a state control benchmark with binary toggle instructions derived from public datasets. Evaluation results for existing agents demonstrate notable unreliability, particularly when the current toggle state already matches the desired state. To address this challenge, we propose State-aware Reasoning (StaR), a multimodal reasoning method that enables agents to perceive the current toggle state, infer the desired state from the instruction, and act accordingly. Experiments on four multimodal agents demonstrate that StaR improves toggle instruction execution accuracy by over 30%. Further evaluations on three public age...
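The state-aware decision rule the abstract describes (perceive the current toggle state, infer the desired state from the instruction, act only on a mismatch) can be sketched as a minimal illustration. This is not the paper's implementation; all names (`ToggleState`, `decide_action`) and the keyword-based instruction parser are hypothetical stand-ins for the agent's multimodal perception and reasoning.

```python
from enum import Enum

class ToggleState(Enum):
    ON = "on"
    OFF = "off"

def desired_state_from_instruction(instruction: str) -> ToggleState:
    # Hypothetical parser: infer the target state from a binary toggle
    # instruction such as "turn on Wi-Fi" or "disable Bluetooth".
    # In the paper this inference is done by the multimodal agent itself.
    text = instruction.lower()
    if any(w in text for w in ("turn on", "enable", "activate")):
        return ToggleState.ON
    return ToggleState.OFF

def decide_action(current: ToggleState, instruction: str) -> str:
    # State-aware decision: act only if the perceived current state
    # differs from the desired one; otherwise do nothing. The abstract
    # notes that agents are especially unreliable when the toggle is
    # already in the desired state, i.e. when "no_op" is correct.
    desired = desired_state_from_instruction(instruction)
    return "no_op" if current == desired else "tap_toggle"

print(decide_action(ToggleState.ON, "turn on Wi-Fi"))      # no_op
print(decide_action(ToggleState.OFF, "enable Bluetooth"))  # tap_toggle
```

A state-blind agent, by contrast, would tap the toggle unconditionally and flip an already-correct setting to the wrong state.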