[2406.17115] Measuring the Measurers: Quality Evaluation of Hallucination Benchmarks for Large Vision-Language Models
Summary
This article evaluates the quality of hallucination benchmarks for Large Vision-Language Models (LVLMs) and introduces a new framework for assessing their reliability and validity.
Why It Matters
As LVLMs become increasingly prevalent in AI applications, understanding and mitigating hallucinations is crucial for ensuring their reliability and safety. This research highlights existing evaluation gaps and proposes a new benchmark that can enhance future model assessments.
Key Takeaways
- Existing hallucination benchmarks for LVLMs can yield inconsistent results across repeated tests or fail to align with human evaluation.
- The proposed Hallucination benchmark Quality Measurement (HQM) framework assesses benchmark reliability and validity.
- A new High-Quality Hallucination benchmark (HQH) demonstrates improved evaluation capabilities.
- Severe hallucination issues were identified in popular LVLMs, necessitating model improvements.
- The research emphasizes the importance of reliable evaluation tools in AI safety.
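The paper's HQM framework assesses benchmarks along reliability and validity indicators, though this summary does not spell out the exact metrics. As an illustrative sketch only (not the authors' method), test-retest reliability is commonly quantified as the correlation between model scores obtained from two repeated runs of the same benchmark; all scores below are hypothetical:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical hallucination scores for four LVLMs from two repeated runs
# of the same benchmark (higher = more hallucination).
run1 = [0.31, 0.45, 0.27, 0.52]
run2 = [0.33, 0.44, 0.30, 0.50]

# A correlation near 1.0 indicates the benchmark ranks models consistently
# across repeated tests; a low value signals unreliable evaluation.
reliability = pearson(run1, run2)
```

Validity could be probed analogously, e.g. by correlating benchmark scores against human judgments instead of a second automated run; again, this is only a sketch of the general idea, not HQM's actual indicators.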
Computer Science > Computer Vision and Pattern Recognition — arXiv:2406.17115 (cs)
Submitted on 24 Jun 2024 (v1), last revised 25 Feb 2026 (this version, v3)
Authors: Bei Yan, Jie Zhang, Zheng Yuan, Shiguang Shan, Xilin Chen
Abstract: Despite the outstanding performance in multimodal tasks, Large Vision-Language Models (LVLMs) have been plagued by the issue of hallucination, i.e., generating content that is inconsistent with the corresponding visual inputs. While previous works have proposed various benchmarks to evaluate this issue, the quality of these evaluations remains unverified. We observe that some of these benchmarks may produce inconsistent evaluation results across repeated tests or fail to align with human evaluation. To address this, we propose a Hallucination benchmark Quality Measurement framework (HQM), which leverages specific indicators to assess both reliability and validity. Our empirical analysis using HQM reveals and pinpoints potential evaluation issues in existing benchmarks, exposing a critical gap in current hallucination evaluation. To bridge this gap, we propose HQH, a High-Quality Hallucination benchmark, which demonstrates superior reliability...