[2603.29928] ScoringBench: A Benchmark for Evaluating Tabular Foundation Models with Proper Scoring Rules
Computer Science > Artificial Intelligence
arXiv:2603.29928 (cs)
[Submitted on 31 Mar 2026]

Title: ScoringBench: A Benchmark for Evaluating Tabular Foundation Models with Proper Scoring Rules
Authors: Jonas Landsgesell, Pascal Knoll

Abstract: Tabular foundation models such as TabPFN and TabICL already produce full predictive distributions, yet prevailing regression benchmarks evaluate them almost exclusively via point-estimate metrics (RMSE, R2). These aggregate measures often obscure model performance in the tails of the distribution, a critical deficit for high-stakes decision making in domains like finance and clinical research, where asymmetric risk profiles are the norm. We introduce ScoringBench, an open benchmark that computes a comprehensive suite of proper scoring rules (CRPS, CRLS, Interval Score, Energy Score, weighted CRPS, and Brier Score) alongside standard point metrics, providing a richer picture of probabilistic forecast quality. We evaluate realTabPFNv2.5 fine-tuned with different scoring-rule objectives, and TabICL, relative to untuned realTabPFNv2.5 across a suite of regression benchmarks. Our results confirm that model rankings depend on the chosen scoring rule and that no single pretraining objective is universally optimal. This demonstrates that for applications sensitive to extrem...
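For reference, the CRPS mentioned in the abstract can be estimated directly from forecast samples via the standard identity CRPS(F, y) = E|X − y| − ½·E|X − X′| with X, X′ ~ F i.i.d. The sketch below is an illustrative NumPy implementation of that estimator, not ScoringBench's code; the function name and interface are assumptions:

```python
import numpy as np

def crps_from_samples(samples, y):
    """Sample-based CRPS estimator: E|X - y| - 0.5 * E|X - X'|.

    `samples` are i.i.d. draws from the predictive distribution F,
    `y` is the observed outcome. Lower is better; for a single
    sample (point forecast) this reduces to the absolute error.
    """
    samples = np.asarray(samples, dtype=float)
    # Mean absolute deviation of the forecast samples from the observation.
    term1 = np.mean(np.abs(samples - y))
    # Mean pairwise absolute difference between samples (spread penalty).
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return float(term1 - term2)

# Two samples at 0 and 1 with observation 0.5: analytic CRPS is 0.25.
print(crps_from_samples([0.0, 1.0], 0.5))  # → 0.25
```

Being a proper scoring rule, CRPS rewards both calibration and sharpness, which is exactly the tail-sensitive behavior the abstract argues point metrics like RMSE miss.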