[2602.05096] Visual concept ranking uncovers medical shortcuts used by large multimodal models
Summary
This article presents a method called Visual Concept Ranking (VCR) that identifies the visual concepts large multimodal models rely on, and uses it to audit their behavior on medical tasks, particularly skin lesion classification.
Why It Matters
The reliability of machine learning models in healthcare is critical, especially in safety-sensitive areas. This research highlights gaps in model performance across demographic groups, emphasizing the need for robust auditing methods to ensure equitable healthcare outcomes.
Key Takeaways
- Visual Concept Ranking (VCR) identifies key visual concepts in multimodal models.
- The study reveals performance disparities in medical tasks based on demographic factors.
- VCR allows for hypothesis generation regarding visual feature dependencies.
- Manual interventions validate the hypotheses generated by VCR.
- The findings underscore the importance of auditing AI models in healthcare.
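The paper does not spell out VCR's algorithm in this summary, but the core idea of ranking visual concepts by their influence on a model's predictions can be illustrated with a simple proxy: compare the model's mean output score on images where a concept is present versus absent, and rank concepts by the size of that shift. The sketch below is a hypothetical illustration under that assumption (the function name, toy data, and concept labels such as "ruler" are invented for the example), not the paper's actual VCR method.

```python
import numpy as np

def rank_concepts(concept_matrix, model_scores, concept_names):
    """Rank concepts by the absolute shift in mean model score between
    images that contain the concept and those that do not.
    (Illustrative proxy only -- not the paper's actual VCR method.)"""
    shifts = []
    for j, name in enumerate(concept_names):
        present = concept_matrix[:, j] == 1
        absent = ~present
        if present.any() and absent.any():
            shift = model_scores[present].mean() - model_scores[absent].mean()
        else:
            shift = 0.0  # concept uniformly present or absent: no contrast
        shifts.append((name, float(shift)))
    # Largest absolute shift first: these concepts most separate the scores
    return sorted(shifts, key=lambda t: abs(t[1]), reverse=True)

# Toy example: 6 images annotated with 3 hypothetical concepts
concepts = np.array([
    [1, 0, 1],
    [1, 0, 0],
    [0, 1, 1],
    [0, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
])
scores = np.array([0.9, 0.8, 0.3, 0.2, 0.85, 0.25])  # model malignancy scores
ranking = rank_concepts(concepts, scores, ["ruler", "dark_skin_tone", "hair"])
print(ranking)
```

A concept like "ruler" topping such a ranking would generate the shortcut hypothesis the paper describes, which could then be checked by manually editing the concept out of images and re-querying the model.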
arXiv:2602.05096 (cs) — Computer Science > Computer Vision and Pattern Recognition
Submitted on 4 Feb 2026 (v1), last revised 13 Feb 2026 (this version, v2)
Title: Visual concept ranking uncovers medical shortcuts used by large multimodal models
Authors: Joseph D. Janizek, Sonnet Xu, Junayd Lateef, Roxana Daneshjou
Abstract: Ensuring the reliability of machine learning models in safety-critical domains such as healthcare requires auditing methods that can uncover model shortcomings. We introduce a method for identifying important visual concepts within large multimodal models (LMMs) and use it to investigate the behaviors these models exhibit when prompted with medical tasks. We primarily focus on the task of classifying malignant skin lesions from clinical dermatology images, with supplemental experiments including both chest radiographs and natural images. After showing how LMMs display unexpected gaps in performance between different demographic subgroups when prompted with demonstrating examples, we apply our method, Visual Concept Ranking (VCR), to these models and prompts. VCR generates hypotheses related to different visual feature dependencies, which we are then able to validate with manual interventions.
Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)...