[2603.29928] ScoringBench: A Benchmark for Evaluating Tabular Foundation Models with Proper Scoring Rules

Computer Science > Artificial Intelligence · arXiv:2603.29928 (cs) · Submitted on 31 Mar 2026

Title: ScoringBench: A Benchmark for Evaluating Tabular Foundation Models with Proper Scoring Rules
Authors: Jonas Landsgesell, Pascal Knoll

Abstract: Tabular foundation models such as TabPFN and TabICL already produce full predictive distributions, yet prevailing regression benchmarks evaluate them almost exclusively via point-estimate metrics (RMSE, R²). These aggregate measures often obscure model performance in the tails of the distribution, a critical deficit for high-stakes decision making in domains like finance and clinical research, where asymmetric risk profiles are the norm. We introduce ScoringBench, an open benchmark that computes a comprehensive suite of proper scoring rules (CRPS, CRLS, Interval Score, Energy Score, weighted CRPS, and Brier Score) alongside standard point metrics, providing a richer picture of probabilistic forecast quality. We evaluate realTabPFNv2.5 fine-tuned with different scoring-rule objectives, and TabICL, relative to untuned realTabPFNv2.5 across a suite of regression benchmarks. Our results confirm that model rankings depend on the chosen scoring rule and that no single pretraining objective is universally optimal. This demonstrates that for applications sensitive to extrem...
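To make two of the scoring rules named in the abstract concrete, here is a minimal sketch of the standard sample-based CRPS estimator and the interval score. This is an illustration, not ScoringBench's actual implementation; the function names and the ensemble-based formulation are assumptions.

```python
import numpy as np

def crps_ensemble(samples, y):
    """Sample-based CRPS estimate: E|X - y| - 0.5 * E|X - X'|,
    where X, X' are independent draws from the predictive distribution.
    (Illustrative helper, not from the ScoringBench codebase.)"""
    samples = np.asarray(samples, dtype=float)
    term_obs = np.mean(np.abs(samples - y))
    term_spread = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term_obs - term_spread

def interval_score(lower, upper, y, alpha):
    """Interval score for a central (1 - alpha) prediction interval:
    interval width plus a penalty of 2/alpha per unit the observation
    falls outside the interval. (Illustrative helper.)"""
    penalty_low = (2.0 / alpha) * max(lower - y, 0.0)
    penalty_high = (2.0 / alpha) * max(y - upper, 0.0)
    return (upper - lower) + penalty_low + penalty_high
```

Lower is better for both scores; note that CRPS reduces to the absolute error when the predictive distribution collapses to a point, which is one reason it is a natural probabilistic generalization of point metrics like MAE.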

Originally published on April 01, 2026. Curated by AI News.
