[2404.17768] Changing the Training Data Distribution to Reduce Simplicity Bias Improves In-distribution Generalization
Computer Science > Machine Learning
arXiv:2404.17768 (cs)
[Submitted on 27 Apr 2024 (v1), last revised 2 Mar 2026 (this version, v3)]

Title: Changing the Training Data Distribution to Reduce Simplicity Bias Improves In-distribution Generalization
Authors: Dang Nguyen, Paymon Haddad, Eric Gan, Baharan Mirzasoleiman

Abstract: Can we modify the training data distribution to encourage the underlying optimization method to find solutions with superior generalization performance on in-distribution data? In this work, we approach this question for the first time by comparing the inductive bias of gradient descent (GD) with that of sharpness-aware minimization (SAM). By studying a two-layer CNN, we rigorously prove that SAM learns different features more uniformly, particularly in early epochs; that is, SAM is less susceptible to simplicity bias than GD. We also show that examples containing features that are learned early are separable from the rest based on the model's output. Based on this observation, we propose a method that (i) clusters examples based on the network output early in training, (ii) identifies a cluster of examples with similar network output, and (iii) upsamples the rest of the examples only once to alleviate the simplicity bias. We show empirically that ...
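The three-step procedure in the abstract can be sketched in NumPy. This is only an illustrative reading of steps (i)-(iii), not the authors' implementation: the function name, the choice of plain 2-means clustering on the output vectors, and the rule "largest cluster = early-learned examples" are all our assumptions. The function returns an index array into the training set in which every example outside the dominant cluster appears twice (upsampled once).

```python
import numpy as np

def upsample_non_dominant(outputs, n_iters=20):
    """Illustrative sketch of the abstract's steps (i)-(iii); names are
    ours, not the authors'.
    (i)   cluster examples by the network's early-training outputs,
    (ii)  take the largest cluster as the 'learned-early' group,
    (iii) duplicate every example outside that cluster exactly once."""
    n = outputs.shape[0]
    # (i) deterministic 2-means: seed with the first point and the point
    # farthest from it, then alternate assignment and update steps
    c0 = outputs[0]
    c1 = outputs[np.linalg.norm(outputs - c0, axis=1).argmax()]
    centers = np.stack([c0, c1]).astype(float)
    for _ in range(n_iters):
        d = np.linalg.norm(outputs[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(2):
            if (labels == k).any():
                centers[k] = outputs[labels == k].mean(axis=0)
    # (ii) the biggest cluster = examples with similar, early-learned output
    major = np.bincount(labels, minlength=2).argmax()
    # (iii) upsample the rest once: append their indices a second time
    rest = np.flatnonzero(labels != major)
    return np.concatenate([np.arange(n), rest])

# Toy run: 8 examples with near-identical early outputs and 2 outliers;
# the 2 minority-cluster examples get duplicated once.
outs = np.concatenate([np.zeros((8, 2)), np.ones((2, 2))])
idx = upsample_non_dominant(outs)
print(len(idx))  # 12: 10 originals plus the 2 duplicates
```

In practice `outputs` would be the model's logits (or output vectors) collected after a few epochs of training, and the returned index array would define the reweighted dataset for the remainder of training.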