[2602.07754] Humanizing AI Grading: Student-Centered Insights on Fairness, Trust, Consistency and Transparency

arXiv - AI · 3 min read · Article

Summary

This study explores student perceptions of AI grading systems, focusing on fairness, trust, consistency, and transparency in an undergraduate computer science course.

Why It Matters

As AI grading systems become more prevalent in education, understanding student perspectives on fairness and transparency is crucial for developing equitable AI tools. This research highlights the need for AI systems that incorporate human judgment and empathy, ensuring they serve as supportive tools rather than replacements for human graders.

Key Takeaways

  • Students express concerns about AI's contextual understanding in grading.
  • AI grading should reflect human judgment and flexibility.
  • Trust and transparency are essential for effective AI grading systems.
  • The study emphasizes the importance of student voices in AI design.
  • AI should serve as a supplementary tool under human oversight.

arXiv:2602.07754 (cs) · Submitted on 8 Feb 2026 (v1), last revised 22 Feb 2026 (this version, v2)

Title: Humanizing AI Grading: Student-Centered Insights on Fairness, Trust, Consistency and Transparency
Authors: Bahare Riahi, Viktoriia Storozhevykh, Veronica Catete

Abstract: This study investigates students' perceptions of Artificial Intelligence (AI) grading systems in an undergraduate computer science course (n = 27), focusing on a block-based programming final project. Guided by the ethical principles framework articulated by Jobin (2019), our study examines fairness, trust, consistency, and transparency in AI grading by comparing AI-generated feedback with original human-graded feedback. Findings reveal concerns about AI's lack of contextual understanding and personalization. We recommend that equitable and trustworthy AI systems reflect human judgment, flexibility, and empathy, serving as supplementary tools under human oversight. This work contributes to ethics-centered assessment practices by amplifying student voices and offering design principles for humanizing AI in designed learning environments.

Subjects: Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC)
ACM classes: I.2.6; I.2.7
Cite as: arXiv:2602.07754

Related Articles

AI Safety

NHS staff resist using Palantir software, reportedly citing ethics concerns, privacy worries, and doubts that the platform adds much value.

Reddit - Artificial Intelligence · 1 min
Machine Learning

AI assistants are optimized to seem helpful. That is not the same thing as being helpful.

RLHF trains models on human feedback. Humans rate responses they like. And it turns out humans consistently rate confident, fluent, agree...

Reddit - Artificial Intelligence · 1 min
Computer Vision

House Democrat Questions Anthropic on AI Safety After Source Code Leak

Rep. Josh Gottheimer, who is generally tough on China, just sent a letter to Anthropic questioning their decision to reduce certain safet...

Reddit - Artificial Intelligence · 1 min
LLMs

[2512.21106] Semantic Refinement with LLMs for Graph Representations

Abstract page for arXiv paper 2512.21106: Semantic Refinement with LLMs for Graph Representations

arXiv - Machine Learning · 4 min