[2602.16033] Transforming GenAI Policy to Prompting Instruction: An RCT of Scalable Prompting Interventions in a CS1 Course

Summary

This article presents a randomized controlled trial (RCT) examining scalable prompting interventions in a CS1 course, highlighting the importance of teaching students to effectively engage with Generative AI for improved learning outcomes.

Why It Matters

As Generative AI becomes increasingly integrated into educational settings, understanding how to effectively teach students to use AI as a learning tool is crucial. This study provides empirical evidence on the effectiveness of prompting interventions, offering insights into enhancing cognitive engagement and academic performance.

Key Takeaways

  • All prompting intervention conditions significantly improved students' prompting skills.
  • Higher engagement in prompting activities correlated with better academic performance.
  • The study validates the ICAP framework, emphasizing cognitive engagement in learning.
  • Interventions are scalable and adaptable for diverse educational contexts.
  • This research contributes to the theoretical understanding of prompting literacy in AI-enhanced learning.

Paper Details

Subject: Computer Science > Human-Computer Interaction
arXiv ID: arXiv:2602.16033 (cs)
Submitted: 17 Feb 2026
Title: Transforming GenAI Policy to Prompting Instruction: An RCT of Scalable Prompting Interventions in a CS1 Course
Authors: Ruiwei Xiao, Runlong Ye, Xinying Hou, Jessica Wen, Harsh Kumar, Michael Liut, John Stamper

Abstract: Despite universal GenAI adoption, students cannot distinguish task performance from actual learning and lack the skills to leverage AI for learning, leading to worse exam performance when AI use remains unreflective. Yet few interventions teaching students to prompt AI as a tutor rather than a solution provider have been validated at scale through randomized controlled trials (RCTs). To bridge this gap, we conducted a semester-long RCT (N=979) with four ICAP framework-based instructional conditions varying in engagement intensity, with a pre-test, immediate and delayed post-tests, and surveys. Mixed-methods analysis showed: (1) all conditions significantly improved prompting skills, with gains increasing progressively from Condition 1 to Condition 4, validating ICAP's cognitive engagement hierarchy; (2) for students with similar pre-test scores, higher learning gains on the immediate post-test predicted higher final exam scores, though no direct between-group diffe...


