A guide to setting up your own Hugging Face leaderboard: an end-to-end example with Vectara's hallucination leaderboard
Published January 12, 2024
Authors: Ofer Mendelevitch (ofermend), Minseok Bae (minseokbae), Clémentine Fourrier (clefourrier)

Hugging Face's Open LLM Leaderboard (originally created by Ed Beeching and Lewis Tunstall, and maintained by Nathan Habib and Clémentine Fourrier) is well known for tracking the performance of open-source LLMs, comparing them across a variety of tasks such as TruthfulQA or HellaSwag. This has been of tremendous value to the open-source community, as it gives practitioners a way to keep track of the best open-source models.

In late 2023, at Vectara we introduced the Hughes Hallucination Evaluation Model (HHEM), an open-source model for measuring the extent to which an LLM hallucinates (generates text that is nonsensical or unfaithful to the provided source content). Covering both open-source models like Llama 2 or Mistral 7B, as well as commercial models like OpenAI's GPT-4, Anthropic Claude, or Google's Gemini, HHEM highlighted the stark differences that currently exist between models in their likelihood to hallucinate.

As we continue to add new models to HHEM, we were looking for an open-source solution to manage and update the HHEM leaderboard. Quite recently, the Hugging Face leaderboard team released leaderboard templates (here and her...
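To make the idea concrete: an evaluation model like HHEM assigns each (source, summary) pair a factual-consistency score, and a leaderboard can aggregate those scores into a per-model hallucination rate. The sketch below shows one way such an aggregation might look; the example scores and the 0.5 threshold are illustrative assumptions, not Vectara's exact pipeline.

```python
# Sketch: turning per-example factual-consistency scores into a
# per-model hallucination rate, as a leaderboard might report.
# The scores and the 0.5 threshold below are illustrative assumptions.

def hallucination_rate(scores, threshold=0.5):
    """Fraction of outputs whose consistency score falls below the threshold."""
    if not scores:
        raise ValueError("no scores provided")
    hallucinated = sum(1 for s in scores if s < threshold)
    return hallucinated / len(scores)

# Hypothetical HHEM-style scores for one model (1.0 = fully consistent).
scores = [0.98, 0.87, 0.42, 0.91, 0.15, 0.77]
print(f"Hallucination rate: {hallucination_rate(scores):.1%}")
```

In practice a leaderboard would compute such a rate once per model over a fixed evaluation set, so that the numbers remain comparable across models.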