[2603.03680] MAGE: Meta-Reinforcement Learning for Language Agents toward Strategic Exploration and Exploitation
Computer Science > Artificial Intelligence

arXiv:2603.03680 (cs) [Submitted on 4 Mar 2026]

Title: MAGE: Meta-Reinforcement Learning for Language Agents toward Strategic Exploration and Exploitation

Authors: Lu Yang, Zelai Xu, Minyang Xie, Jiaxuan Gao, Zhao Shok, Yu Wang, Yi Wu

Abstract: Large Language Model (LLM) agents have demonstrated remarkable proficiency on learned tasks, yet they often struggle to adapt to non-stationary environments from feedback. While in-context learning and external memory offer some flexibility, they fail to internalize the adaptive ability required for long-term improvement. Meta-Reinforcement Learning (meta-RL) provides an alternative by embedding the learning process directly within the model. However, existing meta-RL approaches for LLMs focus primarily on exploration in single-agent settings, neglecting the strategic exploitation necessary for multi-agent environments. We propose MAGE, a meta-RL framework that empowers LLM agents for strategic exploration and exploitation. MAGE uses a multi-episode training regime in which interaction histories and reflections are integrated into the context window. By using the final-episode reward as the objective, MAGE incentivizes the agent to refine its strategy based on past experiences. We further combine populat...
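The abstract's multi-episode regime — carry interaction histories across episodes in the context, and score a trial only by the final episode's return — can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: a two-armed bandit and a frequency-counting "agent" stand in for the environment and the LLM, and the names (`run_trial`, `episode_len`) are hypothetical.

```python
import random

def run_trial(num_episodes=3, episode_len=5, seed=0):
    """Multi-episode regime: interaction histories persist in a shared
    context across episodes; only the FINAL episode's return counts,
    which rewards early exploration that pays off in later exploitation."""
    rng = random.Random(seed)
    # Toy environment: two-armed bandit whose best arm is unknown per trial.
    best_arm = rng.randint(0, 1)
    context = []          # accumulated (action, reward) history across episodes
    final_return = 0
    for ep in range(num_episodes):
        history, ep_return = [], 0
        for _ in range(episode_len):
            # Toy "agent": estimate each arm's mean reward from everything
            # seen so far (context + current episode), then mostly exploit,
            # with an exploration rate that decays over episodes.
            stats = {0: [0, 0], 1: [0, 0]}  # arm -> [pulls, total reward]
            for a, r in context + history:
                stats[a][0] += 1
                stats[a][1] += r
            means = {a: (stats[a][1] / stats[a][0]) if stats[a][0] else 0.5
                     for a in (0, 1)}
            explore = rng.random() < max(0.05, 1.0 - ep / num_episodes)
            action = rng.randint(0, 1) if explore else max(means, key=means.get)
            reward = 1 if action == best_arm else 0
            history.append((action, reward))
            ep_return += reward
        context.extend(history)    # histories carry over into later episodes
        final_return = ep_return   # objective = last episode's return only
    return final_return
```

In an LLM instantiation, `context` would be the serialized histories and reflections in the prompt, and `final_return` the scalar used as the RL objective; here it simply shows how a final-episode-only reward makes early episodes worth spending on exploration.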