[2602.16033] Transforming GenAI Policy to Prompting Instruction: An RCT of Scalable Prompting Interventions in a CS1 Course
Summary
This article summarizes a randomized controlled trial (RCT) of scalable prompting interventions in a CS1 course, showing the importance of teaching students to engage effectively with Generative AI for improved learning outcomes.
Why It Matters
As Generative AI becomes increasingly integrated into educational settings, understanding how to effectively teach students to use AI as a learning tool is crucial. This study provides empirical evidence on the effectiveness of prompting interventions, offering insights into enhancing cognitive engagement and academic performance.
Key Takeaways
- All prompting intervention conditions significantly improved students' prompting skills.
- Higher engagement in prompting activities correlates with better academic performance.
- The study validates the ICAP framework, emphasizing cognitive engagement in learning.
- Interventions are scalable and adaptable for diverse educational contexts.
- This research contributes to the theoretical understanding of prompting literacy in AI-enhanced learning.
Computer Science > Human-Computer Interaction
arXiv:2602.16033 (cs)
[Submitted on 17 Feb 2026]
Title: Transforming GenAI Policy to Prompting Instruction: An RCT of Scalable Prompting Interventions in a CS1 Course
Authors: Ruiwei Xiao, Runlong Ye, Xinying Hou, Jessica Wen, Harsh Kumar, Michael Liut, John Stamper
Abstract: Despite universal GenAI adoption, students cannot distinguish task performance from actual learning and lack the skills to leverage AI for learning, leading to worse exam performance when AI use remains unreflective. Yet few interventions teaching students to prompt AI as a tutor rather than as a solution provider have been validated at scale through randomized controlled trials (RCTs). To bridge this gap, we conducted a semester-long RCT (N=979) with four ICAP framework-based instructional conditions varying in engagement intensity, with a pre-test, immediate and delayed post-tests, and surveys. Mixed-methods analysis showed: (1) all conditions significantly improved prompting skills, with gains increasing progressively from Condition 1 to Condition 4, validating ICAP's cognitive engagement hierarchy; (2) for students with similar pre-test scores, higher learning gains on the immediate post-test predict higher final exam scores, though no direct between-group diffe...