[2603.27695] Optimizing Coverage and Difficulty in Reinforcement Learning for Quiz Composition
Computer Science > Machine Learning
arXiv:2603.27695 (cs) [Submitted on 29 Mar 2026]

Title: Optimizing Coverage and Difficulty in Reinforcement Learning for Quiz Composition
Authors: Ricardo Pedro Querido Andrade Silva, Nassim Bouarour, Dina Fettache, Sarab Boussouar, Noha Ibrahim, Sihem Amer-Yahia

Abstract: Quiz design is a tedious process that teachers undertake to evaluate students' acquisition of knowledge. Our goal in this paper is to automate quiz composition from a set of multiple-choice questions (MCQs). We formalize a generic sequential decision-making problem with the goal of training an agent to compose a quiz that meets the desired topic coverage and difficulty levels. We investigate three reinforcement learning solutions to our problem: DQN, SARSA, and A2C/A3C. We run extensive experiments on synthetic and real datasets that study the ability of RL to land on the best quiz. Our results reveal subtle differences in agent behavior and in transfer learning under different data distributions and teacher goals. These findings were supported by our user study, paving the way for automating a variety of teachers' pedagogical goals.

Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2603.27695 [cs.LG] (or arXiv:2603.27695v1 [cs.LG] for this version) https://doi.org/10.48550/a...
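The abstract does not give the paper's actual formalization, but the kind of sequential decision-making problem it describes can be sketched as a toy environment: the agent adds one MCQ per step, and at episode end the reward penalizes deviation from the target topic coverage and mean difficulty. All names, the reward shape, and the episode structure below are illustrative assumptions, not the authors' design:

```python
import random


class QuizEnv:
    """Toy quiz-composition environment (illustrative sketch only)."""

    def __init__(self, questions, target_coverage, target_difficulty, quiz_len):
        self.questions = questions                  # list of (topic, difficulty in [0, 1])
        self.target_coverage = target_coverage      # dict: topic -> desired fraction
        self.target_difficulty = target_difficulty  # desired mean difficulty
        self.quiz_len = quiz_len
        self.selected = []

    def reset(self):
        self.selected = []
        return tuple(self.selected)

    def step(self, action):
        """Action = index of an unselected question; terminal reward only."""
        self.selected.append(action)
        done = len(self.selected) == self.quiz_len
        reward = self._reward() if done else 0.0
        return tuple(self.selected), reward, done

    def _reward(self):
        picked = [self.questions[i] for i in self.selected]
        # Coverage error: deviation of per-topic fractions from targets.
        cov_err = 0.0
        for topic, frac in self.target_coverage.items():
            got = sum(1 for t, _ in picked if t == topic) / len(picked)
            cov_err += abs(got - frac)
        # Difficulty error: deviation of mean difficulty from target.
        mean_diff = sum(d for _, d in picked) / len(picked)
        return -(cov_err + abs(mean_diff - self.target_difficulty))


random.seed(0)
qs = [("algebra", 0.3), ("algebra", 0.8), ("geometry", 0.5), ("geometry", 0.9)]
env = QuizEnv(qs, {"algebra": 0.5, "geometry": 0.5}, 0.6, quiz_len=2)

# Random-policy rollouts as a stand-in for a trained RL agent.
best_quiz, best_r = None, float("-inf")
for _ in range(50):
    env.reset()
    done = False
    while not done:
        remaining = [i for i in range(len(qs)) if i not in env.selected]
        _, r, done = env.step(random.choice(remaining))
    if r > best_r:
        best_quiz, best_r = list(env.selected), r
```

A trained DQN, SARSA, or A2C/A3C agent would replace the random action choice with a learned policy over the same state (the partial quiz) and action (the next question) spaces.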