[2510.07959] DISCO: Diversifying Sample Condensation for Efficient Model Evaluation
Computer Science > Machine Learning

arXiv:2510.07959 (cs)

[Submitted on 9 Oct 2025 (v1), last revised 28 Feb 2026 (this version, v2)]

Title: DISCO: Diversifying Sample Condensation for Efficient Model Evaluation

Authors: Alexander Rubinstein, Benjamin Raible, Martin Gubri, Seong Joon Oh

Abstract: Evaluating modern machine learning models has become prohibitively expensive. Benchmarks such as LMMs-Eval and HELM demand thousands of GPU hours per model. Costly evaluation reduces inclusivity, slows the cycle of innovation, and worsens environmental impact. The typical approach follows two steps. First, select an anchor subset of the data. Second, train a mapping from accuracy on this subset to the final test result. The drawback is that anchor selection depends on clustering, which can be complex and sensitive to design choices. We argue that promoting diversity among samples is not essential; what matters is selecting samples that $\textit{maximise diversity in model responses}$. Our method, $\textbf{Diversifying Sample Condensation (DISCO)}$, selects the top-k samples with the greatest model disagreements. It uses greedy, sample-wise statistics rather than global clustering, which makes the approach conceptually simpler. From a theoretical view, inter-model disagreement provides an information-theoretically opti...
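The abstract's core idea (rank samples by inter-model disagreement, then greedily take the top-k) can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the disagreement statistic here (fraction of model pairs predicting different labels per sample) and the function names are assumptions for the sake of the example.

```python
from collections import Counter

def disagreement(preds):
    """Fraction of model pairs that disagree on a sample.
    `preds` holds one predicted label per model (assumed statistic,
    not necessarily the one used by DISCO)."""
    n = len(preds)
    counts = Counter(preds)
    agreeing_pairs = sum(c * (c - 1) for c in counts.values())
    total_pairs = n * (n - 1)
    return 1.0 - agreeing_pairs / total_pairs

def select_anchors(pred_matrix, k):
    """Greedy top-k selection: score each sample independently by
    inter-model disagreement, then keep the k highest-scoring ones.
    `pred_matrix[i][j]` is the label model j predicts on sample i."""
    scores = [disagreement(row) for row in pred_matrix]
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked[:k]

# Toy example: 4 models evaluated on 5 samples.
preds = [
    [0, 0, 0, 0],  # full agreement -> score 0.0
    [0, 1, 0, 1],  # 2-vs-2 split   -> score 2/3
    [0, 0, 0, 1],  # one dissenter  -> score 0.5
    [0, 1, 2, 3],  # no agreement   -> score 1.0
    [1, 1, 1, 1],  # full agreement -> score 0.0
]
print(select_anchors(preds, 2))  # -> [3, 1]
```

Because each sample is scored independently, selection is a single O(n log n) sort over per-sample statistics, in contrast with clustering-based anchor selection, which must reason about the dataset globally.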