[2505.19441] Fairness-in-the-Workflow: How Machine Learning Practitioners at Big Tech Companies Approach Fairness in Recommender Systems
Computer Science > Human-Computer Interaction
arXiv:2505.19441 (cs)
[Submitted on 26 May 2025 (v1), last revised 27 Feb 2026 (this version, v3)]

Title: Fairness-in-the-Workflow: How Machine Learning Practitioners at Big Tech Companies Approach Fairness in Recommender Systems
Authors: Jing Nathan Yan, Emma Harvey, Junxiong Wang, Jeffrey M. Rzeszotarski, Allison Koenecke

Abstract: Recommender systems (RS), which are widely deployed across high-stakes domains, are susceptible to biases that can cause large-scale societal impacts. Researchers have proposed methods to measure and mitigate such biases, but translating academic theory into practice is inherently challenging. Through a semi-structured interview study (N=11), we map the RS practitioner workflow within large technology companies, focusing on how technical teams consider fairness internally and in collaboration with legal, data, and fairness teams. We identify key challenges to incorporating fairness into existing RS workflows: defining fairness in RS contexts, balancing multi-stakeholder interests, and navigating dynamic environments. We also identify key organization-wide challenges: making time for fairness work and facilitating cross-team communication. Finally, we offer actionable recommendations...