The Hallucinations Leaderboard, an Open Effort to Measure Hallucinations in Large Language Models
Published January 29, 2024

By Pasquale Minervini (pminervini), Ping Nie (pingnieuk), Clémentine Fourrier (clefourrier), Rohit Saxena (rohitsaxena), Aryo Pradipta Gema (aryopg), Xuanli He (zodiache)

In the rapidly evolving field of Natural Language Processing (NLP), Large Language Models (LLMs) have become central to AI's ability to understand and generate human language. However, a significant challenge persists: their tendency to hallucinate, i.e., to produce content that does not align with real-world facts or the user's input. With new open-source models being released constantly, identifying the most reliable ones, particularly in terms of their propensity to generate hallucinated content, becomes crucial. The Hallucinations Leaderboard aims to address this problem: it is a comprehensive platform that evaluates a wide array of LLMs against benchmarks specifically designed to assess hallucination-related issues via in-context learning (see the sketch below for a concrete illustration).

UPDATE -- We released a paper on this project; you can find it on arXiv: The Hallucinations Leaderboard -- An Open Effort to Measure Hallucinations in Large Language Models. Here is also the Hugging Face paper page for community discussions.

The Hallucinations Leaderboard is an open and ongoing project: if you have any ideas, comments, or feedback...
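To give a concrete sense of what evaluation via in-context learning means, here is a minimal sketch: a causal LM is prompted with a few question-answer demonstrations (with no gradient updates) and its greedy completion is compared against a reference answer. This is an illustration only, not the leaderboard's actual pipeline; the model name `gpt2`, the example questions, and the exact-match check are all placeholder assumptions.

```python
# Minimal sketch of few-shot in-context evaluation for factuality.
# Assumptions: the `transformers` library is installed; "gpt2" stands in
# for any causal LM on the Hub; the demonstrations and the scoring rule
# are illustrative, not the leaderboard's actual benchmarks or metrics.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Few-shot prompt: demonstrations are provided purely in context, and the
# model is asked to complete the answer to the final question.
prompt = (
    "Q: What is the capital of France?\nA: Paris\n\n"
    "Q: Who wrote 'Pride and Prejudice'?\nA: Jane Austen\n\n"
    "Q: What is the largest planet in the Solar System?\nA:"
)

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=8,
        do_sample=False,  # greedy decoding for a deterministic answer
        pad_token_id=tokenizer.eos_token_id,
    )

# Keep only the newly generated tokens, not the prompt.
answer = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
).strip()
print(answer)

# A simple exact-match check against the reference answer; real
# hallucination benchmarks use task-specific metrics instead.
print("correct" if answer.lower().startswith("jupiter") else "possible hallucination")
```

A model that completes the prompt with anything other than the reference answer would, under this toy exact-match rule, be flagged as a possible hallucination; the benchmarks on the leaderboard apply the same in-context prompting idea with task-appropriate datasets and metrics.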