[2603.16629] MLLM-based Textual Explanations for Face Comparison
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.16629 (cs)
[Submitted on 17 Mar 2026 (v1), last revised 26 Mar 2026 (this version, v3)]

Title: MLLM-based Textual Explanations for Face Comparison
Authors: Redwan Sony, Anil K. Jain, Arun Ross

Abstract: Multimodal Large Language Models (MLLMs) have recently been proposed as a means to generate natural-language explanations for face recognition decisions. While such explanations facilitate human interpretability, their reliability on unconstrained face images remains underexplored. In this work, we systematically analyze MLLM-generated explanations for the unconstrained face verification task on the challenging IJB-S dataset, with a particular focus on extreme pose variation and surveillance imagery. Our results show that even when MLLMs produce correct verification decisions, the accompanying explanations frequently rely on non-verifiable or hallucinated facial attributes that are not supported by visual evidence. We further study the effect of incorporating information from traditional face recognition systems, viz., scores and decisions, alongside the input images. Although such information improves categorical verification performance, it does not consistently lead to faithful explanations. To evaluate the explanations beyond decision accuracy, we introduce a ...
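The abstract mentions supplying an MLLM with scores and decisions from a traditional face recognition system alongside the input images. A minimal sketch of how such auxiliary information could be folded into a verification prompt is shown below; the function and wording are illustrative assumptions, not the paper's actual prompting scheme.

```python
def build_verification_prompt(fr_score: float, threshold: float,
                              include_fr_info: bool = True) -> str:
    """Compose a text prompt asking an MLLM to verify two face images.

    Hypothetical helper: the paper studies adding FR-system scores and
    decisions to the MLLM input, but this exact formulation is assumed.
    """
    prompt = (
        "You are given two face images. Decide whether they show the same "
        "person and explain your decision using only visible facial evidence."
    )
    if include_fr_info:
        # Derive the FR system's categorical decision from its score.
        decision = "match" if fr_score >= threshold else "non-match"
        prompt += (
            f" A traditional face recognition system reports a similarity "
            f"score of {fr_score:.2f} (threshold {threshold:.2f}), "
            f"i.e. a {decision}."
        )
    return prompt

print(build_verification_prompt(0.73, 0.60))
```

The `include_fr_info` flag mirrors the paper's two conditions: MLLM alone versus MLLM augmented with FR-system information.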