[2603.04718] AI-Assisted Moot Courts: Simulating Justice-Specific Questioning in Oral Arguments
Computer Science > Computation and Language
arXiv:2603.04718 (cs)
[Submitted on 5 Mar 2026]

Title: AI-Assisted Moot Courts: Simulating Justice-Specific Questioning in Oral Arguments
Authors: Kylie Zhang, Nimra Nadeem, Lucia Zheng, Dominik Stammbach, Peter Henderson

Abstract: In oral arguments, judges probe attorneys with questions about the factual record, legal claims, and the strength of their arguments. To prepare for this questioning, both law schools and practicing attorneys rely on moot courts: practice simulations of appellate hearings. Leveraging a dataset of U.S. Supreme Court oral argument transcripts, we examine whether AI models can effectively simulate justice-specific questioning for moot court-style training. Evaluating oral argument simulation is challenging because there is no single correct question for any given turn. Instead, effective questioning should reflect a combination of desirable qualities, such as anticipating substantive legal issues, detecting logical weaknesses, and maintaining an appropriately adversarial tone. We introduce a two-layer evaluation framework that assesses both the realism and pedagogical usefulness of simulated questions using complementary proxy metrics. We construct and evaluate both prompt-based and agentic oral argument simulators. We find that sim...
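
To make the abstract's "prompt-based oral argument simulator" concrete, here is a minimal sketch of what such a component might look like. This is not the authors' system: the transcript fields, the prompt wording, and the `complete` callable standing in for a language-model backend are all assumptions introduced for illustration.

    # Hypothetical sketch of a prompt-based, justice-specific question simulator.
    # The data model, prompt text, and `complete` backend are illustrative only.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class ArgumentTurn:
        speaker: str  # e.g. "ATTORNEY" or a justice's name
        text: str     # what was said in this turn

    def build_prompt(justice: str, case_summary: str, turns: list[ArgumentTurn]) -> str:
        """Assemble a prompt asking the model to question the attorney as a named justice."""
        history = "\n".join(f"{t.speaker}: {t.text}" for t in turns)
        return (
            f"You are Justice {justice} in a U.S. Supreme Court oral argument.\n"
            f"Case summary: {case_summary}\n\n"
            f"Transcript so far:\n{history}\n\n"
            f"Ask the attorney one probing question in Justice {justice}'s style, "
            f"targeting a factual gap or logical weakness in the argument."
        )

    def simulate_question(justice: str, case_summary: str,
                          turns: list[ArgumentTurn],
                          complete: Callable[[str], str]) -> str:
        """Generate one simulated justice question; `complete` is any text-completion backend."""
        return complete(build_prompt(justice, case_summary, turns))

Under this framing, an agentic simulator would differ mainly in how the prompt is constructed (e.g., retrieving case materials or tracking argument state across turns), while the evaluation framework described in the abstract would score the generated questions on proxy metrics for realism and pedagogical usefulness.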