[2603.23755] Self Paced Gaussian Contextual Reinforcement Learning
Computer Science > Machine Learning
arXiv:2603.23755 (cs)
[Submitted on 24 Mar 2026]

Title: Self Paced Gaussian Contextual Reinforcement Learning
Authors: Mohsen Sahraei Ardakani, Rui Song

Abstract: Curriculum learning improves reinforcement learning (RL) efficiency by sequencing tasks from simple to complex. However, many self-paced curriculum methods rely on computationally expensive inner-loop optimizations, limiting their scalability in high-dimensional context spaces. In this paper, we propose Self-Paced Gaussian Curriculum Learning (SPGL), a novel approach that avoids costly numerical procedures by leveraging a closed-form update rule for Gaussian context distributions. SPGL maintains the sample efficiency and adaptability of traditional self-paced methods while substantially reducing computational overhead. We provide theoretical guarantees on convergence and validate our method across several contextual RL benchmarks, including the Point Mass, Lunar Lander, and Ball Catching environments. Experimental results show that SPGL matches or outperforms existing curriculum methods, especially in hidden-context scenarios, and achieves more stable context distribution convergence. Our method offers a scalable, principled alternative for curriculum generation in challenging continuous and partially observable domains.
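The abstract states that SPGL replaces inner-loop optimization with a closed-form update of a Gaussian context distribution, but the abstract page does not give the rule itself. The sketch below is an illustrative assumption only: it shows the general shape of such an update, interpolating the curriculum's Gaussian moments toward a target context distribution with a fixed pacing step `alpha` (a hypothetical parameter, not from the paper).

```python
import numpy as np

def gaussian_curriculum_update(mu, Sigma, mu_target, Sigma_target, alpha=0.1):
    """Hypothetical closed-form Gaussian context update.

    SPGL's actual rule is not specified on this page; this simple moment
    interpolation toward the target distribution is an assumption used to
    illustrate what "closed-form" (no inner-loop optimization) means here.
    """
    mu_new = (1.0 - alpha) * mu + alpha * mu_target
    Sigma_new = (1.0 - alpha) * Sigma + alpha * Sigma_target
    return mu_new, Sigma_new

# Example: 2-D context (e.g., a goal position). The curriculum starts
# from an easy, narrow context distribution and drifts toward the
# harder, wider target distribution.
mu, Sigma = np.zeros(2), np.eye(2) * 0.1
mu_t, Sigma_t = np.array([3.0, -2.0]), np.eye(2) * 1.0
for _ in range(100):
    mu, Sigma = gaussian_curriculum_update(mu, Sigma, mu_t, Sigma_t)
```

Each step here costs only a few vector and matrix additions, which is the scalability argument the abstract makes against methods that solve a numerical optimization problem per curriculum update.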