[2602.19441] When AI Teammates Meet Code Review: Collaboration Signals Shaping the Integration of Agent-Authored Pull Requests
Summary
This paper investigates how AI-generated pull requests integrate into human-led code review processes, emphasizing the importance of collaboration signals and reviewer engagement.
Why It Matters
As AI tools increasingly participate in software development, understanding their integration into existing workflows is crucial for optimizing collaboration between human developers and AI agents. This research provides insights that can enhance the effectiveness of code reviews and improve software quality.
Key Takeaways
- Reviewer engagement is critical for the successful integration of AI-authored pull requests.
- Larger change sizes and coordination-disrupting actions, such as force pushes, reduce the likelihood of merging.
- Effective integration relies on alignment with established review practices, not just code quality.
- Actionable review loops between AI and human reviewers enhance integration success.
- Collaboration signals are essential in shaping the outcomes of AI contributions.
Computer Science > Software Engineering
arXiv:2602.19441 (cs) [Submitted on 23 Feb 2026]
Title: When AI Teammates Meet Code Review: Collaboration Signals Shaping the Integration of Agent-Authored Pull Requests
Authors: Costain Nachuma, Minhaz Zibran
Abstract
Autonomous coding agents increasingly contribute to software development by submitting pull requests on GitHub; yet, little is known about how these contributions integrate into human-driven review workflows. We present a large empirical study of agent-authored pull requests using the public AIDev dataset, examining integration outcomes, resolution speed, and review-time collaboration signals. Using logistic regression with repository-clustered standard errors, we find that reviewer engagement has the strongest correlation with successful integration, whereas larger change sizes and coordination-disrupting actions, such as force pushes, are associated with a lower likelihood of merging. In contrast, iteration intensity alone provides limited explanatory power once collaboration signals are considered. A qualitative analysis further shows that successful integration occurs when agents engage in actionable review loops that converge toward reviewer expectations. Overall, our results highlight that the effective i...
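The abstract's core method, logistic regression with repository-clustered standard errors, can be sketched as follows. This is a minimal illustration, not the authors' code: the predictor names (`engagement`, `change_size`, `force_push`), the synthetic data-generating process, and all magnitudes are assumptions chosen only to mirror the reported directions of effect.

```python
import numpy as np

def fit_logit_clustered(X, y, groups, iters=50):
    """Logistic regression via Newton-Raphson, with cluster-robust (sandwich)
    standard errors that allow correlated outcomes within each cluster."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ ((p * (1 - p))[:, None] * X)    # observed information
        step = np.linalg.solve(H, X.T @ (y - p))  # Newton step
        beta += step
        if np.max(np.abs(step)) < 1e-8:
            break
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    bread = np.linalg.inv(X.T @ ((p * (1 - p))[:, None] * X))
    scores = X * (y - p)[:, None]
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):                   # sum scores within each repo
        s = scores[groups == g].sum(axis=0)
        meat += np.outer(s, s)
    cov = bread @ meat @ bread                    # sandwich estimator
    return beta, np.sqrt(np.diag(cov))

# Synthetic pull-request data (hypothetical variables, illustrative effects only)
rng = np.random.default_rng(0)
n = 400
engagement = rng.poisson(2, n).astype(float)      # e.g., reviewer comments
change_size = rng.exponential(200.0, n)           # e.g., lines changed
force_push = rng.binomial(1, 0.15, n).astype(float)
repo_id = rng.integers(0, 40, n)                  # repository cluster per PR

logit = -0.2 + 0.5 * engagement - 0.003 * change_size - 0.8 * force_push
merged = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), engagement, change_size, force_push])
beta, se = fit_logit_clustered(X, merged, repo_id)
print("coef:", np.round(beta, 3))
print("clustered SE:", np.round(se, 3))
```

Clustering by repository matters because pull requests in the same repository share reviewers, norms, and tooling, so their outcomes are not independent; the sandwich estimator widens the standard errors to reflect that within-repository correlation.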