[2505.08264] Automatic Curriculum Learning for Driving Scenarios: Towards Robust and Efficient Reinforcement Learning
Computer Science > Robotics
arXiv:2505.08264 (cs)
[Submitted on 13 May 2025 (v1), last revised 5 Mar 2026 (this version, v3)]

Title: Automatic Curriculum Learning for Driving Scenarios: Towards Robust and Efficient Reinforcement Learning
Authors: Ahmed Abouelazm, Tim Weinstein, Tim Joseph, Philip Schörner, J. Marius Zöllner

Abstract: This paper addresses the challenges of training end-to-end autonomous driving agents using Reinforcement Learning (RL). RL agents are typically trained on a fixed set of scenarios, with nominal behavior of surrounding road users, in simulation, which limits their generalization and real-world deployment. While domain randomization offers a potential solution by randomly sampling driving scenarios, it frequently results in inefficient training and sub-optimal policies due to the high variance among training scenarios. To address these limitations, we propose an automatic curriculum learning framework that dynamically generates driving scenarios with adaptive complexity based on the agent's evolving capabilities. Unlike manually designed curricula, which introduce expert bias and lack scalability, our framework incorporates a "teacher" that automatically generates and mutates driving scenarios based on their learning potential -- an agent-centric metric ...
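The teacher-student loop sketched in the abstract can be illustrated with a minimal, hypothetical implementation. All names here are assumptions: the abstract does not specify how the learning-potential metric is computed, so this sketch uses the absolute change in the agent's return on a scenario as a common proxy, and mutates scenarios whose potential exceeds a threshold.

```python
import random

def mutate(scenario, rng, scale=0.1):
    """Perturb each scenario parameter slightly (e.g. traffic density, speeds).
    Parameters and mutation scheme are illustrative, not from the paper."""
    return [p + rng.gauss(0.0, scale) for p in scenario]

class CurriculumTeacher:
    """Hypothetical 'teacher' that maintains a population of scenario
    parameter vectors and prioritizes those with high learning potential."""

    def __init__(self, seed_scenarios, rng=None):
        self.rng = rng or random.Random(0)
        # Map scenario (as a tuple) -> last observed agent return on it.
        self.population = {tuple(s): None for s in seed_scenarios}

    def learning_potential(self, scenario, new_return):
        old = self.population[scenario]
        # Unseen scenarios get maximal priority; otherwise use |change in return|
        # as a stand-in for the paper's agent-centric metric.
        return float("inf") if old is None else abs(new_return - old)

    def propose(self):
        """Pick the next training scenario, favouring unseen ones."""
        unseen = [s for s, r in self.population.items() if r is None]
        if unseen:
            return self.rng.choice(unseen)
        return self.rng.choice(list(self.population))

    def update(self, scenario, new_return, threshold=0.05):
        """Record the agent's return; if the scenario still has high learning
        potential, mutate it to keep exploring nearby complexity levels."""
        lp = self.learning_potential(scenario, new_return)
        self.population[scenario] = new_return
        if lp > threshold:
            child = tuple(mutate(list(scenario), self.rng))
            self.population[child] = None  # unseen child joins the curriculum
        return lp
```

In an actual training loop, `propose` would configure the simulator, the RL student would train on the resulting scenario, and the episode return would be fed back through `update`; the population thereby drifts toward scenarios at the frontier of the agent's current capability.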