[2603.26611] Benchmarking Tabular Foundation Models for Conditional Density Estimation in Regression
Computer Science > Machine Learning
arXiv:2603.26611 (cs)
[Submitted on 27 Mar 2026]

Title: Benchmarking Tabular Foundation Models for Conditional Density Estimation in Regression
Authors: Rafael Izbicki, Pedro L. C. Rodrigues

Abstract: Conditional density estimation (CDE) - recovering the full conditional distribution of a response given tabular covariates - is essential in settings with heteroscedasticity, multimodality, or asymmetric uncertainty. Recent tabular foundation models, such as TabPFN and TabICL, naturally produce predictive distributions, but their effectiveness as general-purpose CDE methods has not been systematically evaluated, unlike their performance for point prediction, which is well studied. We benchmark three tabular foundation model variants against a diverse set of parametric, tree-based, and neural CDE baselines on 39 real-world datasets, across training sizes from 50 to 20,000, using six metrics covering density accuracy, calibration, and computation time. Across all sample sizes, foundation models achieve the best CDE loss, log-likelihood, and CRPS on the large majority of datasets tested. Calibration is competitive at small sample sizes but, for some metrics and datasets, lags behind task-specific neural baselines at larger sample sizes, suggesting that post...
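The abstract evaluates predictive distributions with metrics such as log-likelihood and CRPS. As a minimal illustration of what a CRPS evaluation looks like, the sketch below scores a Gaussian predictive distribution using the standard closed-form expression for Gaussian CRPS. This is an assumption for illustration only: the paper's actual metric implementations, baselines, and predictive families are not specified here, and `crps_gaussian` is a hypothetical helper, not code from the paper.

```python
import math

def crps_gaussian(y: float, mu: float, sigma: float) -> float:
    """Closed-form CRPS of a Gaussian predictive distribution N(mu, sigma^2)
    evaluated at the observed value y. Lower is better; CRPS generalizes
    absolute error to full distributions."""
    z = (y - mu) / sigma
    # Standard normal CDF and PDF at z
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))

# Example: a well-centered but overconfident forecast scores worse when
# the observation lands in the tail.
print(round(crps_gaussian(y=0.0, mu=0.0, sigma=1.0), 4))  # ~0.2337
print(round(crps_gaussian(y=3.0, mu=0.0, sigma=0.5), 4))
```

A benchmark like the one described would average such per-observation scores over a held-out test set, alongside log-likelihood and calibration diagnostics.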