# Community Evals: Because we're done trusting black-box leaderboards over the community
Published February 4, 2026

*Ben Burtenshaw, Nathan Habib, Bertrand Chevrier, Merve, Daniel van Strien, Niels Rogge, Julien Chaumond*

**TL;DR:** Benchmark datasets on Hugging Face can now host leaderboards. Models store their own eval scores. Everything links together. The community can submit results via PR. Verified badges prove that the results can be reproduced.

## Evaluation is broken

Let's be real about where we are with evals in 2026. MMLU is saturated above 91%. GSM8K hit 94%+. HumanEval is conquered. Yet, based on usage reports, some models that ace benchmarks still can't reliably browse the web, write production code, or handle multi-step tasks without hallucinating. There is a clear gap between benchmark scores and real-world performance.

There is also a second gap within the reported scores themselves: multiple sources report different results for the same model and benchmark. From model cards to papers to evaluation platforms, reported scores don't align. The result is that the community lacks a single source of truth.

## What We're Shipping

Decentralized and transparent evaluation reporting. We are taking evaluations on the Hugging Face Hub in a new direction by decentralizing reporting and allowing the entire community to openly report scores f...
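
To make the PR-based submission flow concrete, here is a minimal sketch using the existing `huggingface_hub` client to open a pull request that adds an eval result to a benchmark dataset repo. The repo IDs, file path, and JSON payload below are hypothetical illustrations of the idea, not the final submission format.

```python
# Minimal sketch: submit a community eval result as a PR against a benchmark
# dataset repo, using the huggingface_hub client. Repo IDs, the file path,
# and the result schema are hypothetical placeholders.
import json

from huggingface_hub import CommitOperationAdd, HfApi

api = HfApi()

# Hypothetical eval result payload for one model on one benchmark.
result = {
    "model": "my-org/my-model",      # model being evaluated (placeholder)
    "dataset": "my-org/my-benchmark",  # benchmark dataset hosting the leaderboard (placeholder)
    "metric": "accuracy",
    "value": 0.87,
    "harness": "lighteval",          # tool used to produce the score
}

# Open a pull request instead of committing directly, so maintainers and
# the community can review the score before it lands on the leaderboard.
api.create_commit(
    repo_id="my-org/my-benchmark",
    repo_type="dataset",
    operations=[
        CommitOperationAdd(
            path_in_repo="eval_results/my-org__my-model.json",  # placeholder path
            path_or_fileobj=json.dumps(result, indent=2).encode(),
        )
    ],
    commit_message="Add eval result for my-org/my-model",
    create_pr=True,  # submit as a reviewable PR rather than a direct push
)
```

Routing every submission through a PR is what makes the reporting transparent: the score, the diff, and the discussion are all public, and a reviewed, reproduced result is the natural point at which a verified badge can be attached.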