[2601.08697] Auditing Student-AI Collaboration: A Case Study of Online Graduate CS Students
Summary
This study audits collaboration between online graduate CS students and AI, examining students' preferences for automation across academic tasks and identifying gaps between current AI capabilities and their expectations.
Why It Matters
As generative AI becomes integral to education, understanding student preferences and concerns is crucial for developing effective AI systems that enhance learning while maintaining academic integrity. This research highlights the need for AI tools that align with students' expectations and ethical considerations.
Key Takeaways
- Students have varying preferences for AI assistance in academic tasks.
- Concerns include over-automation and the reliability of AI outputs.
- The study identifies gaps between current AI capabilities and student expectations.
- Mixed-methods surveys provide insights into student experiences with AI.
- Effective AI design in education must address student concerns and preferences.
Computer Science > Human-Computer Interaction
arXiv:2601.08697 (cs)
[Submitted on 13 Jan 2026 (v1), last revised 19 Feb 2026 (this version, v3)]
Title: Auditing Student-AI Collaboration: A Case Study of Online Graduate CS Students
Authors: Nifu Dan
Abstract: As generative AI becomes embedded in higher education, it increasingly shapes how students complete academic tasks. While these systems offer efficiency and support, concerns persist regarding over-automation, diminished student agency, and the potential for unreliable or hallucinated outputs. This study conducts a mixed-methods audit of student-AI collaboration preferences by examining the alignment between current AI capabilities and students' desired levels of automation in academic work. Using two sequential and complementary surveys, we capture students' perceived benefits, risks, and preferred boundaries when using AI. The first survey employs an existing task-based framework to assess preferences for and actual usage of AI across 12 academic tasks, alongside primary concerns and reasons for use. The second survey, informed by the first, explores how AI systems could be designed to address these concerns through open-ended questions. This study aims to identify gaps between existing AI affordances and students' normative expectations of collaboration, informing the d...