[2508.16165] Investigating Multimodal Large Language Models to Support Usability Evaluation
Computer Science > Software Engineering
arXiv:2508.16165 (cs)
[Submitted on 22 Aug 2025 (v1), last revised 10 Apr 2026 (this version, v2)]

Title: Investigating Multimodal Large Language Models to Support Usability Evaluation
Authors: Sebastian Lubos, Alexander Felfernig, Damian Garber, Gerhard Leitner, Julian Schwazer, Manuel Henrich

Abstract: Usability evaluation is an essential method to support the design of effective and intuitive user interfaces (UIs). However, it commonly relies on resource-intensive, expert-driven methods, which limit its accessibility, especially for small organizations. Recent multimodal large language models (MLLMs) have the potential to support usability evaluation by analyzing textual instructions together with visual UI context. This paper investigates the use of MLLMs as assistive tools for usability evaluation by framing the task as a prioritization problem: the model identifies and explains usability issues and ranks them by severity. We report a study that compares the evaluations generated by multiple MLLMs with assessments from usability experts. The results demonstrate that MLLMs can offer complementary insights and support the efficient prioritization of critical issues. Additionally, we present an interactive visualization tool that enables the transparent review and v...
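
As a rough illustration of the prioritization framing described in the abstract (not the authors' implementation), the sketch below sends a UI screenshot and a task description to a vision-capable chat model and asks for a severity-ranked list of usability issues. The model name, prompt wording, severity scale, and JSON schema are all assumptions made for this example.

    import base64
    import json

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def evaluate_ui(screenshot_path: str, task_description: str) -> list[dict]:
        """Ask an MLLM for usability issues in a UI screenshot, ranked by severity."""
        with open(screenshot_path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode()

        # Hypothetical prompt and schema; the paper does not publish its exact prompts.
        prompt = (
            "You are a usability expert. Given the task description and the UI "
            "screenshot, return a JSON object with a key 'issues' holding an array "
            "of objects with fields 'issue', 'explanation', and 'severity' "
            "(1 = cosmetic ... 5 = critical), ordered from most to least severe.\n\n"
            f"Task: {task_description}"
        )

        response = client.chat.completions.create(
            model="gpt-4o",  # assumed model; the study compares multiple MLLMs
            response_format={"type": "json_object"},
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                ],
            }],
        )
        return json.loads(response.choices[0].message.content)["issues"]

    # Example usage with hypothetical inputs:
    issues = evaluate_ui("checkout_screen.png", "Complete a purchase as a guest user")
    for item in issues:
        print(f"[severity {item['severity']}] {item['issue']}")

In practice, the ranked output of such a call would then be compared against expert severity judgments, which is the comparison the reported study performs.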