[2602.17930] MIRA: Memory-Integrated Reinforcement Learning Agent with Limited LLM Guidance
Summary
The paper presents MIRA, a Memory-Integrated Reinforcement Learning Agent that reduces reliance on large language models (LLMs) by using a structured memory graph to improve learning in sparse-reward environments.
Why It Matters
MIRA addresses the high sample complexity of reinforcement learning by integrating a memory graph that captures decision-relevant information. This allows for improved early-stage learning while minimizing dependence on LLMs: costly LLM queries are amortized into a persistent memory rather than repeated throughout training.
Key Takeaways
- MIRA utilizes a memory graph to guide reinforcement learning, reducing the need for constant LLM supervision.
- The memory graph stores high-return experiences and LLM outputs, facilitating better policy updates.
- The approach shows improved early-stage learning in environments with sparse rewards.
- MIRA achieves performance comparable to LLM-dependent methods while requiring fewer online queries.
- The paper's theoretical analysis supports utility-based advantage shaping as a means of improving learning efficiency.
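The memory graph described above can be sketched as a small data structure. This is an illustrative reading of the summary, not the paper's implementation: class and method names (`MemoryGraph`, `insert_episode`, `insert_llm_subgoals`) and the return-threshold filter are assumptions introduced here for concreteness.

```python
class MemoryGraph:
    """Illustrative memory graph: nodes hold trajectory segments or
    subgoals, edges record their observed ordering. Both the agent's
    high-return episodes and one-time LLM outputs are amortized into
    the same structure."""

    def __init__(self, return_threshold):
        self.return_threshold = return_threshold  # keep only high-return experience
        self.nodes = {}      # node_id -> {"payload", "source", "utility"}
        self.edges = {}      # node_id -> set of successor node_ids
        self._next_id = 0

    def _add_node(self, payload, source, utility):
        nid = self._next_id
        self._next_id += 1
        self.nodes[nid] = {"payload": payload, "source": source, "utility": utility}
        self.edges[nid] = set()
        return nid

    def insert_episode(self, segments, episode_return):
        """Store an agent trajectory only if its return clears the threshold."""
        if episode_return < self.return_threshold:
            return []
        ids = [self._add_node(seg, "agent", episode_return) for seg in segments]
        for a, b in zip(ids, ids[1:]):
            self.edges[a].add(b)  # preserve segment ordering
        return ids

    def insert_llm_subgoals(self, subgoals, prior_utility=0.5):
        """Amortize a one-time LLM subgoal decomposition into the graph,
        instead of querying the LLM continuously during training."""
        ids = [self._add_node(g, "llm", prior_utility) for g in subgoals]
        for a, b in zip(ids, ids[1:]):
            self.edges[a].add(b)
        return ids

    def utility(self, node_id):
        """Look up the stored utility for a node (used to shape learning)."""
        return self.nodes[node_id]["utility"]
```

For example, a low-return episode would be rejected by `insert_episode`, while a single LLM call decomposing a task into subgoals such as "find key" and "open door" would persist in the graph for the rest of training.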
Paper Details
Computer Science > Machine Learning, arXiv:2602.17930 (cs)
Submitted on 20 Feb 2026
Title: MIRA: Memory-Integrated Reinforcement Learning Agent with Limited LLM Guidance
Authors: Narjes Nourzad, Carlee Joe-Wong
Abstract: Reinforcement learning (RL) agents often suffer from high sample complexity in sparse or delayed reward settings due to limited prior structure. Large language models (LLMs) can provide subgoal decompositions, plausible trajectories, and abstract priors that facilitate early learning. However, heavy reliance on LLM supervision introduces scalability constraints and dependence on potentially unreliable signals. We propose MIRA (Memory-Integrated Reinforcement Learning Agent), which incorporates a structured, evolving memory graph to guide early training. The graph stores decision-relevant information, including trajectory segments and subgoal structures, and is constructed from both the agent's high-return experiences and LLM outputs. This design amortizes LLM queries into a persistent memory rather than requiring continuous real-time supervision. From this memory graph, we derive a utility signal that softly adjusts advantage estimation to influence policy updates without modifying the underlying reward function. As training progresses, the agent's policy gradually surpasses the init...
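The abstract's utility-based shaping can be sketched as follows. This is a hedged reading of the description, not the paper's exact formulation: the shaped advantage A'(s, a) = A(s, a) + beta * u(s, a), where u comes from the memory graph and beta is a shaping weight, is an assumed form. The key property it illustrates is that the environment reward, and hence the task being optimized, is left untouched.

```python
def shaped_advantages(advantages, utilities, beta=0.1):
    """Softly blend memory-derived utilities into advantage estimates.

    advantages : per-step advantage estimates from the critic
    utilities  : per-step utility scores looked up from the memory graph
    beta       : shaping weight (assumed hyperparameter); annealing
                 beta toward 0 recovers the unshaped policy update once
                 the agent's own policy surpasses the memory's guidance
    """
    return [a + beta * u for a, u in zip(advantages, utilities)]
```

With beta = 0.5, an advantage of -0.5 at a step whose memory utility is 2.0 becomes 0.5, nudging the policy toward memory-endorsed actions early on while the reward signal itself stays unchanged.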