[2411.09847] Towards a Fairer Non-negative Matrix Factorization

arXiv - Machine Learning · 4 min read

Summary

This paper proposes a fairness-aware variant of Non-negative Matrix Factorization (NMF): the standard objective function is replaced with a min-max formulation that minimizes the worst-off group's reconstruction error rather than the population average.

Why It Matters

As fairness in machine learning becomes increasingly critical, this research addresses the need for practical bias mitigation strategies. By exploring NMF, a widely used method in topic modeling and feature extraction, the authors contribute to the ongoing discourse on balancing fairness and accuracy in algorithmic decision-making.

Key Takeaways

  • Introduces a min-max formulation to enhance fairness in NMF.
  • Demonstrates that fairness improvements may come at the cost of increased error for some individuals.
  • Emphasizes the importance of context in defining fairness and selecting methods.
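The min-max idea in the takeaways above can be written out concretely. The following is our notation, not necessarily the paper's exact formulation: the data rows are partitioned into groups, each group contributes a size-normalized reconstruction error, and the factorization minimizes the largest of these, in contrast to standard NMF, which minimizes the total error.

```latex
% Standard NMF: minimize the overall reconstruction error
\min_{W \ge 0,\, H \ge 0} \; \| X - WH \|_F^2

% Min-max ("fairer") variant: minimize the worst group's error.
% X_g are the rows of X belonging to group g (n_g rows),
% and W_g are the corresponding rows of W; H is shared.
\min_{W \ge 0,\, H \ge 0} \; \max_{g} \; \frac{1}{n_g} \| X_g - W_g H \|_F^2
```

Because the max is taken over groups, no group's error can be traded away to lower the average, which is also why fairness may raise the error for individuals in otherwise well-fit groups, as noted above.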

Computer Science > Machine Learning
arXiv:2411.09847 (cs)
[Submitted on 14 Nov 2024 (v1), last revised 24 Feb 2026 (this version, v2)]

Title: Towards a Fairer Non-negative Matrix Factorization
Authors: Lara Kassab, Erin George, Deanna Needell, Haowen Geng, Nika Jafar Nia, Aoxi Li

Abstract: There has been a recent critical need to study fairness and bias in machine learning (ML) algorithms. Since there is clearly no one-size-fits-all solution to fairness, ML methods should be developed alongside bias mitigation strategies that are practical and approachable to the practitioner. Motivated by recent work on "fair" PCA, here we consider the more challenging method of non-negative matrix factorization (NMF) as both a showcasing example and a method that is important in its own right for both topic modeling tasks and feature extraction for other ML tasks. We demonstrate that a modification of the objective function, by using a min-max formulation, may *sometimes* be able to offer an improvement in fairness for groups in the population. We derive two methods for the objective minimization, a multiplicative update rule as well as an alternating minimization scheme, and discuss implementation practicalities. We include a suite of synthetic and real experiments that show how the method may improve fairness while also highlighting the...
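The abstract mentions a multiplicative update rule and an alternating minimization scheme but the excerpt does not give either in full. The sketch below is our own illustrative approximation, not the paper's algorithm: it combines Lee-Seung-style multiplicative updates for W and H with a multiplicative-weights step on per-group weights `alpha`, which pushes weight toward the worst-off group so that the weighted objective tracks the min-max one. The function name, parameters, and the `eta` step size are all our inventions.

```python
import numpy as np

def minmax_nmf_sketch(X, groups, rank, n_iter=200, eta=1.0, eps=1e-10, seed=0):
    """Illustrative min-max-style NMF (not the paper's exact method).

    X      : (n, m) non-negative data matrix, rows are samples
    groups : length-n array of group labels for each row
    rank   : target factorization rank
    Returns non-negative factors W (n, rank), H (rank, m) and a dict
    of per-group mean squared reconstruction errors.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    group_ids = np.unique(groups)
    sizes = np.array([(groups == g).sum() for g in group_ids])
    alpha = np.ones(len(group_ids)) / len(group_ids)  # group weights

    for _ in range(n_iter):
        # Spread each group's weight over its rows (size-normalized).
        row_w = np.zeros(n)
        for a, g, s in zip(alpha, group_ids, sizes):
            row_w[groups == g] = a / s
        A = row_w[:, None]

        # Multiplicative update for H on the row-weighted objective.
        H *= (W.T @ (A * X)) / (W.T @ (A * (W @ H)) + eps)
        # The W update is row-separable, so per-row weights cancel.
        W *= (X @ H.T) / (W @ H @ H.T + eps)

        # Per-group mean squared errors under the current factors.
        R = X - W @ H
        errs = np.array([np.mean(R[groups == g] ** 2) for g in group_ids])
        # Multiplicative-weights step: up-weight the worst-off group.
        alpha *= np.exp(eta * errs / (errs.max() + eps))
        alpha /= alpha.sum()

    return W, H, dict(zip(group_ids, errs))
```

All updates multiply non-negative quantities, so W and H stay non-negative throughout, preserving NMF's interpretability; the outer weight update is what distinguishes this heuristic from plain weighted NMF.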

