[2602.20001] FairFS: Addressing Deep Feature Selection Biases for Recommender System

arXiv - Machine Learning · 4 min read

Summary

The paper presents FairFS, a novel algorithm designed to address biases in feature selection for recommender systems, improving both the accuracy of feature importance estimation and downstream model performance.

Why It Matters

As recommender systems are integral to e-commerce, ensuring accurate feature selection is crucial for improving user experience and operational efficiency. FairFS addresses significant biases that can lead to suboptimal performance, making it a valuable contribution to the field of machine learning.

Key Takeaways

  • FairFS mitigates three key biases in feature selection: layer bias, baseline bias, and approximation bias.
  • The algorithm enhances feature importance estimation across all model layers, improving accuracy.
  • Extensive experiments show FairFS achieves state-of-the-art performance in feature selection.
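The "baseline bias" named above refers to importance estimates that depend on which replacement value is used when a feature is ablated. A toy NumPy sketch of this general effect (my own illustration, not the FairFS algorithm; the model, weights, and data are invented) shows how a zero baseline can inflate the score of a feature with a large mean but a small weight:

```python
import numpy as np

# Toy illustration of baseline-dependent importance (not FairFS itself):
# ablation importance scores change with the chosen baseline value.

rng = np.random.default_rng(1)
n, d = 1000, 3
X = rng.normal(size=(n, d))
X[:, 2] += 10.0                      # feature 2 has a large constant offset
w = np.array([1.0, 0.8, 0.1])        # ...but barely matters to the model

def f(X):
    return X @ w                     # a fixed linear "model" for illustration

def ablation_importance(X, baseline):
    scores = []
    for j in range(X.shape[1]):
        Xa = X.copy()
        Xa[:, j] = baseline[j]       # replace feature j with its baseline
        scores.append(np.mean(np.abs(f(X) - f(Xa))))
    return np.array(scores)

zero_scores = ablation_importance(X, np.zeros(d))
mean_scores = ablation_importance(X, X.mean(axis=0))
```

Under the zero baseline, feature 2 scores roughly |w_2| * E|x_2| ≈ 0.1 * 10 = 1.0 and looks most important; under the mean baseline it scores near 0.1 * E|x_2 - mean(x_2)| ≈ 0.08 and ranks last. The same model, the same data, and two baselines yield opposite conclusions, which is the kind of inconsistency the paper's "baseline bias" points at.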

Computer Science > Information Retrieval
arXiv:2602.20001 (cs) · [Submitted on 23 Feb 2026]

Title: FairFS: Addressing Deep Feature Selection Biases for Recommender System
Authors: Xianquan Wang, Zhaocheng Du, Jieming Zhu, Qinglin Jia, Zhenhua Dong, Kai Zhang

Abstract: Large-scale online marketplaces and recommender systems serve as critical technological support for e-commerce development. In industrial recommender systems, features play vital roles as they carry information for downstream models. Accurate feature importance estimation is critical because it helps identify the most useful feature subsets from thousands of feature candidates for online services. Such selection enables improved online performance while reducing computational cost. To address feature selection problems in deep learning, trainable gate-based and sensitivity-based methods have been proposed and proven effective in industrial practice. However, through the analysis of real-world cases, we identified three bias issues that cause feature importance estimation to rely on partial model layers, samples, or gradients, ultimately leading to inaccurate importance estimation. We refer to these as layer bias, baseline bias, and approximation bias. To mitigate these issues, we propose FairFS, a fair and accurate feature selection algorithm. Fai...
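The abstract mentions trainable gate-based methods as one established family of deep feature selection approaches. A minimal NumPy sketch of that general technique (my own illustration of gate-based selection, not the FairFS algorithm; the linear model, data, and hyperparameters are all invented) multiplies each feature by a learnable sigmoid gate and uses an L1 penalty to push uninformative gates toward zero:

```python
import numpy as np

# Sketch of trainable-gate feature selection (the generic technique, not
# FairFS): each feature is scaled by a learnable gate g_j = sigmoid(a_j);
# an L1 penalty on the gates shrinks gates of uninformative features.

rng = np.random.default_rng(0)
n, d = 512, 6
X = rng.normal(size=(n, d))
true_w = np.array([2.0, -1.5, 0.0, 0.0, 1.0, 0.0])  # features 2, 3, 5 are noise
y = X @ true_w + 0.1 * rng.normal(size=n)

a = np.zeros(d)          # gate logits (gates start at 0.5)
w = np.zeros(d)          # linear-model weights
lr, lam, wd = 0.3, 0.05, 0.01
for _ in range(800):
    g = 1.0 / (1.0 + np.exp(-a))
    err = (X * g) @ w - y                 # residual of the gated model
    grad_w = err @ (X * g) / n + wd * w   # MSE gradient + weight decay
    grad_g = err @ (X * w) / n + lam      # MSE gradient + L1 gate penalty
    w -= lr * grad_w
    a -= lr * grad_g * g * (1.0 - g)      # chain rule through the sigmoid

g = 1.0 / (1.0 + np.exp(-a))              # final gates = importance scores
selected = np.where(g > 0.5)[0]           # keep features with open gates
```

After training, gates on the informative features stay open while the noise gates collapse, so thresholding the gate values yields the selected feature subset. Importance estimated this way is exactly the kind of signal the paper argues can be distorted by layer, baseline, and approximation biases in deep multi-layer models.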
