[2602.05999] On the Role of Iterative Computation in Reinforcement Learning
Summary
This paper explores how the amount of compute available to reinforcement learning (RL) policies influences their learning capabilities and generalization, proposing a minimal architecture that adapts compute usage for improved performance.
Why It Matters
Understanding the relationship between compute resources and RL policy performance is crucial for developing more efficient algorithms. This research formalizes compute-bounded policies and demonstrates that increased compute can enhance problem-solving and generalization, which is vital for advancing RL applications in complex environments.
Key Takeaways
- The paper formalizes the concept of compute-bounded policies in RL.
- Increased compute allows policies to solve more complex problems and generalize better.
- The proposed minimal architecture can effectively make use of varying amounts of compute.
- Empirical results show significant performance improvements with more compute.
- The findings challenge the common practice of conflating parameter count with compute in RL.
Computer Science > Machine Learning
arXiv:2602.05999 (cs)
[Submitted on 5 Feb 2026 (v1), last revised 17 Feb 2026 (this version, v2)]
Title: On the Role of Iterative Computation in Reinforcement Learning
Authors: Raj Ghugare, Michał Bortkiewicz, Alicja Ziarko, Benjamin Eysenbach
Abstract: How does the amount of compute available to a reinforcement learning (RL) policy affect its learning? Can policies that use a fixed number of parameters still benefit from additional compute? The standard RL framework does not provide a language to answer these questions formally. Empirically, deep RL policies are often parameterized as neural networks with static architectures, conflating the amount of compute with the number of parameters. In this paper, we formalize compute-bounded policies and prove that policies which use more compute can solve problems, and generalize to longer-horizon tasks, that are outside the scope of policies with less compute. Building on prior work in algorithmic learning and model-free planning, we propose a minimal architecture that can use a variable amount of compute. Our experiments complement our theory. On a set of 31 different tasks spanning online and offline RL, we show that (1) this architecture achieves stronger performance simply by using more compute, and (2) stronger generalization on longer-hor...
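The abstract's central idea is an architecture that decouples compute from parameter count. As a minimal sketch of that idea (this is not the authors' actual architecture; the class and its dimensions here are hypothetical), a weight-tied network can reuse one fixed set of parameters for a variable number of iterations, so increasing the iteration budget adds compute without adding parameters:

```python
import math
import random


class IterativePolicy:
    """Hypothetical weight-tied policy: fixed parameters, variable compute.

    The same weight matrices are applied at every iteration, so the
    parameter count is independent of the compute budget ``n_iters``.
    """

    def __init__(self, obs_dim, hidden_dim, n_actions, seed=0):
        rng = random.Random(seed)

        def mat(rows, cols):
            return [[rng.gauss(0.0, 0.1) for _ in range(cols)] for _ in range(rows)]

        # One shared set of weights, reused at every iteration.
        self.W_in = mat(hidden_dim, obs_dim)
        self.W_h = mat(hidden_dim, hidden_dim)
        self.W_out = mat(n_actions, hidden_dim)

    @staticmethod
    def _matvec(M, v):
        return [sum(m * x for m, x in zip(row, v)) for row in M]

    def act(self, obs, n_iters):
        """Select an action; more iterations = more compute, same parameters."""
        x = self._matvec(self.W_in, obs)
        h = [0.0] * len(x)
        for _ in range(n_iters):  # the iteration budget is the compute knob
            pre = self._matvec(self.W_h, h)
            h = [math.tanh(a + b) for a, b in zip(x, pre)]
        logits = self._matvec(self.W_out, h)
        return max(range(len(logits)), key=logits.__getitem__)


policy = IterativePolicy(obs_dim=4, hidden_dim=8, n_actions=2)
obs = [1.0, 1.0, 1.0, 1.0]
# Identical parameters; only the compute budget differs between these calls.
a_small = policy.act(obs, n_iters=1)
a_large = policy.act(obs, n_iters=16)
```

Training such a policy under a randomized iteration budget is one way its compute could be scaled at test time, e.g. running more iterations on longer-horizon tasks, without changing the number of parameters.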