[2508.08337] Position: Beyond Sensitive Attributes, ML Fairness Should Quantify Structural Injustice via Social Determinants

arXiv - Machine Learning · 4 min read

Summary

This paper argues for a shift in machine learning fairness research toward quantifying structural injustice through social determinants, rather than focusing solely on sensitive attributes.

Why It Matters

Machine learning fairness matters because algorithmic decisions directly affect marginalized groups. By emphasizing structural injustice, the authors argue for a more comprehensive understanding of fairness, one that can lead to better outcomes in critical areas such as healthcare and education.

Key Takeaways

  • Current fairness frameworks often overlook structural injustices.
  • Social determinants should be quantified to better understand unfairness.
  • Mitigation strategies based solely on sensitive attributes can exacerbate injustices.
  • The paper presents a theoretical model to illustrate these concepts.
  • Auditing structural injustice is essential before implementing fairness measures.
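To make the second and third takeaways concrete, here is a minimal toy simulation (not the paper's model; all variable names such as `funded`, `group`, and `score` are hypothetical) showing how an audit along a social determinant can reveal a structural disparity that a sensitive-attribute audit understates:

```python
# Toy sketch: outcomes are driven by a contextual social determinant
# (school resourcing), not by the sensitive attribute directly.
import random
from statistics import mean

random.seed(0)

def simulate(n=10_000):
    rows = []
    for _ in range(n):
        group = random.choice(["A", "B"])  # sensitive attribute
        # Structural context: group B is less likely to attend a
        # well-resourced school, but the two groups overlap substantially.
        funded = random.random() < (0.7 if group == "A" else 0.5)
        # The outcome depends on the context, not on group membership.
        score = random.gauss(60 + (15 if funded else 0), 5)
        rows.append((group, funded, score))
    return rows

rows = simulate()

# Audit 1: disparity along the sensitive attribute only.
by_group = {g: mean(s for gg, _, s in rows if gg == g) for g in ("A", "B")}
attr_gap = by_group["A"] - by_group["B"]

# Audit 2: disparity along the social determinant (the structural axis).
by_ctx = {f: mean(s for _, ff, s in rows if ff == f) for f in (True, False)}
ctx_gap = by_ctx[True] - by_ctx[False]

print(f"gap by sensitive attribute: {attr_gap:.1f}")
print(f"gap by social determinant:  {ctx_gap:.1f}")
# The context gap (roughly 15 points) far exceeds the attribute gap
# (roughly 3 points), so an audit restricted to sensitive attributes
# understates the structural source of the disparity.
```

This is why the paper's position calls for auditing contextual variables as signal rather than normalizing them away as noise: the structural gap here is invisible to any metric computed over sensitive attributes alone.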

Computer Science > Computers and Society
arXiv:2508.08337 (cs)
[Submitted on 10 Aug 2025 (v1), last revised 25 Feb 2026 (this version, v2)]

Title: Position: Beyond Sensitive Attributes, ML Fairness Should Quantify Structural Injustice via Social Determinants
Authors: Zeyu Tang, Alex John London, Atoosa Kasirzadeh, Sarah Stewart de Ramirez, Peter Spirtes, Kun Zhang, Sanmi Koyejo

Abstract: Algorithmic fairness research has largely framed unfairness as discrimination along sensitive attributes. However, this approach limits visibility into unfairness as structural injustice instantiated through social determinants, which are contextual variables that shape attributes and outcomes without pertaining to specific individuals. This position paper argues that the field should quantify structural injustice via social determinants, beyond sensitive attributes. Drawing on cross-disciplinary insights, we argue that prevailing technical paradigms fail to adequately capture unfairness as structural injustice, because contexts are potentially treated as noise to be normalized rather than signal to be audited. We further demonstrate the practical urgency of this shift through a theoretical model of college admissions, a demographic study using U.S. census data, and a high-stakes domain app...

Related Articles

Yupp shuts down after raising $33M from a16z crypto's Chris Dixon | TechCrunch
Machine Learning

Less than a year after launching, with checks from some of the biggest names in Silicon Valley, crowdsourced AI model feedback startup Yu...

TechCrunch - AI · 4 min ·
Machine Learning

[R] Fine-tuning services report

If you have some data and want to train or run a small custom model but don't have powerful enough hardware for training, fine-tuning ser...

Reddit - Machine Learning · 1 min ·
Machine Learning

[D] Does ML have a "bible"/reference textbook at the Intermediate/Advanced level?

Hello, everyone! This is my first time posting here and I apologise if the question is, perhaps, a bit too basic for this sub-reddit. A b...

Reddit - Machine Learning · 1 min ·
Machine Learning

[D] ICML 2026 review policy debate: 100 responses suggest Policy B may score higher, while Policy A shows higher confidence

A week ago I made a thread asking whether ICML 2026’s review policy might have affected review outcomes, especially whether Policy A pape...

Reddit - Machine Learning · 1 min ·