[2602.22438] From Bias to Balance: Fairness-Aware Paper Recommendation for Equitable Peer Review
Summary
This paper introduces Fair-PaperRec, a fairness-aware paper recommendation system designed to mitigate biases in peer review, enhancing equity in scholarly participation.
Why It Matters
The study addresses systemic biases in academic peer review, which often disadvantage underrepresented groups. By implementing a fairness regularizer in paper recommendations, the research offers a practical solution to promote equity while maintaining quality, thus fostering a more inclusive academic environment.
Key Takeaways
- Fair-PaperRec utilizes a fairness regularizer to improve equity in paper recommendations.
- The system demonstrated a 42.03% increase in participation from underrepresented groups with minimal impact on overall quality.
- Controlled studies confirm the robustness of fairness parameters across various bias levels.
- The framework is adaptable for real-world applications, enhancing diversity in academic publishing.
- Fairness regularization can serve as both an equity mechanism and a quality enhancer.
arXiv:2602.22438 (cs) — Computer Science > Machine Learning
[Submitted on 25 Feb 2026]
Title: From Bias to Balance: Fairness-Aware Paper Recommendation for Equitable Peer Review
Authors: Uttamasha Anjally Oyshi, Susan Gauch
Abstract: Despite the frequent use of double-blind review, systemic biases related to author demographics still disadvantage underrepresented groups. We start from a simple hypothesis: if a post-review recommender is trained with an explicit fairness regularizer, it should increase inclusion without degrading quality. To test this, we introduce Fair-PaperRec, a Multi-Layer Perceptron (MLP) with a differentiable fairness loss over intersectional attributes (e.g., race, country) that re-ranks papers after double-blind review. We first probe the hypothesis on synthetic datasets spanning high, moderate, and near-fair bias levels. Across multiple randomized runs, these controlled studies map where increasing the fairness weight strengthens macro/micro diversity while keeping utility approximately stable, demonstrating robustness and adaptability under varying disparity levels. We then carry the hypothesis into the original setting: conference data from the ACM Special Interest Group on Computer-Human Interaction (SIGCHI), Designing Interactive Systems (DIS), and Intelligent User Interfaces (IUI...
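The abstract does not specify the exact form of the differentiable fairness loss. As an illustration only, a fairness-weighted objective of this kind typically combines a utility term with a group-disparity penalty scaled by a fairness weight. The function name, the squared-error utility term, and the variance-of-group-means penalty below are assumptions for the sketch, not the paper's actual formulation:

```python
import numpy as np

def fairness_regularized_loss(scores, relevance, groups, lam=0.5):
    """Hypothetical sketch of a fairness-regularized ranking objective.

    scores    : predicted recommendation scores for each paper
    relevance : ground-truth utility/relevance signal for each paper
    groups    : demographic group label per paper's authors
    lam       : fairness weight (larger = stronger equity pressure)
    """
    # Utility term: mean squared error between predictions and relevance.
    utility = np.mean((scores - relevance) ** 2)
    # Fairness term: variance of the mean predicted score across groups;
    # zero when every group receives the same average exposure.
    group_means = [scores[groups == g].mean() for g in np.unique(groups)]
    fairness = np.var(group_means)
    return utility + lam * fairness
```

With `lam = 0` the objective reduces to pure utility; raising `lam` trades a small amount of utility for lower cross-group disparity, mirroring the fairness-weight sweep the controlled studies describe.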