[2603.03867] k-hop Fairness: Addressing Disparities in Graph Link Prediction Beyond First-Order Neighborhoods


arXiv - Machine Learning


Computer Science > Machine Learning · arXiv:2603.03867 (cs) · Submitted on 4 Mar 2026

Title: k-hop Fairness: Addressing Disparities in Graph Link Prediction Beyond First-Order Neighborhoods

Authors: Lilian Marey, Tiphaine Viard, Charlotte Laclau

Abstract: Link prediction (LP) plays a central role in graph-based applications, particularly in social recommendation. However, real-world graphs often reflect structural biases, most notably homophily, the tendency of nodes with similar attributes to connect. While this property can improve predictive performance, it also risks reinforcing existing social disparities. In response, fairness-aware LP methods have emerged, often seeking to mitigate these effects by promoting inter-group connections, that is, links between nodes with differing sensitive attributes (e.g., gender), following the principle of dyadic fairness. However, dyadic fairness overlooks potential disparities within the sensitive groups themselves. To overcome this issue, we propose $k$-hop fairness, a structural notion of fairness for LP, that assesses disparities conditioned on the distance between nodes in the graph. We formalize this notion through predictive fairness and structural bias metrics, and propose pre- and post-processing mitigation strategies. Experiments across ...
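The abstract only sketches the idea of conditioning fairness on graph distance; the paper's actual metric definitions are not shown here. As a rough illustration of the intuition, the following hypothetical sketch compares mean link-prediction scores for intra-group versus inter-group node pairs, bucketed by shortest-path hop distance $k$ (all function and variable names are invented for this example, not taken from the paper):

```python
# Hypothetical sketch of a distance-conditioned ("k-hop") disparity
# measure for link prediction: for each hop distance k, compare the
# mean predicted scores of intra-group vs inter-group node pairs.
# The paper's exact metrics are not given in the abstract; every name
# below is illustrative.
from collections import defaultdict, deque

def hop_distances(adj, src, max_k):
    """BFS hop counts from src over an adjacency dict, up to max_k."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if dist[u] == max_k:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def khop_disparity(adj, group, scores, max_k=3):
    """Absolute gap between mean intra-group and inter-group
    prediction scores, reported separately for each hop distance k.
    (One BFS per scored pair: fine for a sketch, not for large graphs.)"""
    buckets = defaultdict(list)  # (k, "intra"/"inter") -> list of scores
    for (u, v), s in scores.items():
        k = hop_distances(adj, u, max_k).get(v)
        if not k:  # skip unreachable pairs and self-pairs
            continue
        kind = "intra" if group[u] == group[v] else "inter"
        buckets[(k, kind)].append(s)
    out = {}
    for k in range(1, max_k + 1):
        intra, inter = buckets[(k, "intra")], buckets[(k, "inter")]
        if intra and inter:
            out[k] = abs(sum(intra) / len(intra) - sum(inter) / len(inter))
    return out

# Toy path graph 0-1-2-3-4 with a binary sensitive attribute.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
group = {0: "a", 1: "a", 2: "b", 3: "a", 4: "b"}
scores = {(0, 1): 0.9, (1, 2): 0.7, (2, 3): 0.6, (3, 4): 0.8,
          (0, 2): 0.4, (1, 3): 0.5, (2, 4): 0.3,
          (0, 3): 0.45, (1, 4): 0.35}
print(khop_disparity(adj, group, scores))
```

A dyadic-fairness view would pool all pairs into a single intra-vs-inter gap; the point of conditioning on $k$, as the abstract describes it, is that the gap can differ sharply between close and distant pairs even when the pooled gap looks small.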

Originally published on March 05, 2026. Curated by AI News.

