[2603.02951] CGL: Advancing Continual GUI Learning via Reinforcement Fine-Tuning
Computer Science > Machine Learning
arXiv:2603.02951 (cs)
[Submitted on 3 Mar 2026]

Title: CGL: Advancing Continual GUI Learning via Reinforcement Fine-Tuning
Authors: Zhenquan Yao, Zitong Huang, Yihan Zeng, Jianhua Han, Hang Xu, Chun-Mei Feng, Jianwei Ma, Wangmeng Zuo

Abstract: Graphical User Interface (GUI) agents, benefiting from recent advances in multimodal large language models (MLLMs), have developed rapidly. However, because GUI applications are updated frequently, adapting to new tasks without forgetting old ones remains an open problem in GUI continual learning. In this work, we show that while Supervised Fine-Tuning (SFT) enables fast adaptation, it often triggers knowledge overwriting, whereas Reinforcement Learning (RL) exhibits an inherent resilience that shields prior interaction logic from erasure. Based on this insight, we propose a Continual GUI Learning (CGL) framework that dynamically balances adaptation efficiency and skill retention by enhancing the synergy between SFT and RL. Specifically, we introduce an SFT proportion adjustment mechanism guided by policy entropy to dynamically control the weight allocation between the SFT and RL training phases. To resolve explicit gradient interference, we further develop a specialized gradient surgery ...
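The two mechanisms named in the abstract can be illustrated with a minimal sketch. The entropy-to-weight schedule and the projection rule below are assumptions for illustration (the projection follows the standard PCGrad-style gradient surgery; the paper's exact formulation may differ): a high-entropy (uncertain) policy receives a larger SFT share, and when the SFT and RL gradients conflict, the SFT gradient is projected onto the normal plane of the RL gradient before the two are summed.

```python
import math

def entropy_sft_weight(probs):
    """Hypothetical schedule: normalized policy entropy in [0, 1].
    A more uncertain policy (higher entropy) gets a larger SFT share."""
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(len(probs))  # divide by max entropy log(K)

def surgery_combine(g_sft, g_rl):
    """PCGrad-style gradient surgery (assumed variant): if the SFT and
    RL gradients conflict (negative dot product), remove the conflicting
    component of g_sft along g_rl, then sum the two gradients."""
    dot = sum(a * b for a, b in zip(g_sft, g_rl))
    if dot < 0:
        scale = dot / sum(b * b for b in g_rl)
        g_sft = [a - scale * b for a, b in zip(g_sft, g_rl)]
    return [a + b for a, b in zip(g_sft, g_rl)]

# Uniform policy over 4 actions -> maximal entropy -> full SFT weight.
w = entropy_sft_weight([0.25, 0.25, 0.25, 0.25])  # -> 1.0

# Conflicting gradients: the projected SFT component is orthogonal to g_rl.
g = surgery_combine([1.0, 0.0], [-1.0, 1.0])  # -> [-0.5, 1.5]
```

After projection, the SFT contribution no longer opposes the RL update direction, which is the property the abstract's "gradient surgery" appeals to for preserving prior interaction logic.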