[2406.14194] VLBiasBench: A Comprehensive Benchmark for Evaluating Bias in Large Vision-Language Model
Computer Science > Computer Vision and Pattern Recognition
arXiv:2406.14194 (cs)
[Submitted on 20 Jun 2024 (v1), last revised 5 Apr 2026 (this version, v3)]

Title: VLBiasBench: A Comprehensive Benchmark for Evaluating Bias in Large Vision-Language Model
Authors: Sibo Wang, Xiangkui Cao, Jie Zhang, Zheng Yuan, Shiguang Shan, Xilin Chen, Wen Gao

Abstract: The emergence of Large Vision-Language Models (LVLMs) marks a significant stride toward general artificial intelligence. However, these advances are accompanied by concerns about biased outputs, a challenge that has yet to be thoroughly explored. Existing benchmarks are not sufficiently comprehensive in evaluating bias because of their limited data scale, single questioning format, and narrow sources of bias. To address this problem, we introduce VLBiasBench, a comprehensive benchmark designed to evaluate biases in LVLMs. VLBiasBench features a dataset covering nine distinct categories of social bias (age, disability status, gender, nationality, physical appearance, race, religion, profession, and socioeconomic status) as well as two intersectional bias categories: race × gender and race × socioeconomic status. To build a large-scale dataset, we use the Stable Diffusion XL model to generate 46,848 high-quality images, which...