[2602.13817] What happens when reviewers receive AI feedback in their reviews?

arXiv - AI Article

Summary

This paper examines the impact of an AI feedback tool deployed at ICLR 2025 on peer review, revealing both benefits and tensions reviewers experienced when receiving AI suggestions during the review process.

Why It Matters

As AI continues to influence various fields, understanding its role in peer review is crucial. This study provides empirical evidence on how AI feedback affects reviewer engagement and perceptions, highlighting the balance between enhancing review quality and maintaining human oversight.

Key Takeaways

  • Advocates see potential for AI feedback tools to reduce reviewer burden and improve review quality.
  • Reviewers express mixed feelings about the usability and impact of AI suggestions.
  • The study provides the first empirical evidence of AI in a live peer review context.
  • Balancing AI assistance with human expertise is essential for effective peer review.
  • Design implications are offered to enhance AI-assisted reviewing processes.

Computer Science > Human-Computer Interaction

arXiv:2602.13817 (cs) [Submitted on 14 Feb 2026]

Title: What happens when reviewers receive AI feedback in their reviews?

Authors: Shiping Chen, Shu Zhong, Duncan P. Brumby, Anna L. Cox

Abstract: AI is reshaping academic research, yet its role in peer review remains polarising and contentious. Advocates see its potential to reduce reviewer burden and improve quality, while critics warn of risks to fairness, accountability, and trust. At ICLR 2025, an official AI feedback tool was deployed to provide reviewers with post-review suggestions. We studied this deployment through surveys and interviews, investigating how reviewers engaged with the tool and perceived its usability and impact. Our findings surface both opportunities and tensions when AI augments peer review. This work contributes the first empirical evidence of such an AI tool in a live review process, documenting how reviewers respond to AI-generated feedback in a high-stakes review context. We further offer design implications for AI-assisted reviewing that aim to enhance quality while safeguarding human expertise, agency, and responsibility.

Subjects: Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI)

Cite as: arXiv:2602.13817 [cs.HC] (or arXiv:2602.13817v1 [cs.HC] for this version)

