[2509.21782] Benchmarking MLLM-based Web Understanding: Reasoning, Robustness and Safety
Computer Science > Artificial Intelligence

arXiv:2509.21782 (cs)

[Submitted on 26 Sep 2025 (v1), last revised 4 Mar 2026 (this version, v2)]

Authors: Junliang Liu, Jingyu Xiao, Wenxin Tang, Zhixian Wang, Zipeng Xie, Wenxuan Wang, Minrui Zhang, Shuanghe Yu

Abstract: Multimodal large language models (MLLMs) are increasingly deployed as the core reasoning engine for web-facing systems, powering GUI agents and front-end automation that must interpret page structure, select actionable widgets, and execute multi-step interactions reliably. However, existing benchmarks largely emphasize visual perception or UI code generation, offering insufficient evaluation of the reasoning, robustness, and safety capabilities required for end-to-end web applications. To bridge this gap, we introduce WebRRSBench, a comprehensive web understanding benchmark that jointly evaluates Reasoning, Robustness, and Safety across eight tasks, such as position-relationship reasoning, color robustness, and safety-critical detection. The benchmark is constructed from 729 websites and contains 3799 QA pairs that probe multi-step inference over page structure, te...
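To make the benchmark's shape concrete, here is a minimal sketch of how a QA-pair dataset of this kind might be represented and scored per task. All names (`WebQAItem`, its fields, the task labels) are illustrative assumptions, not the paper's actual data format or evaluation code.

```python
from dataclasses import dataclass

@dataclass
class WebQAItem:
    # Hypothetical schema for one benchmark item; field names are
    # assumptions for illustration, not taken from the WebRRSBench release.
    website_id: str        # one of the 729 source websites
    task: str              # e.g. "position_relationship", "color_robustness"
    question: str
    choices: list[str]
    answer: str            # gold answer


def accuracy_by_task(items: list[WebQAItem], predictions: dict) -> dict:
    """Exact-match accuracy per task; predictions are keyed by item index."""
    correct, total = {}, {}
    for i, item in enumerate(items):
        total[item.task] = total.get(item.task, 0) + 1
        if predictions.get(i) == item.answer:
            correct[item.task] = correct.get(item.task, 0) + 1
    return {t: correct.get(t, 0) / n for t, n in total.items()}


# Two toy items standing in for the 3799 real QA pairs.
items = [
    WebQAItem("site_001", "position_relationship",
              "Is the search box above the nav bar?", ["yes", "no"], "no"),
    WebQAItem("site_002", "color_robustness",
              "Is the submit button still identifiable?", ["yes", "no"], "yes"),
]
scores = accuracy_by_task(items, {0: "no", 1: "no"})
```

A real harness would additionally attach the page screenshot or DOM snippet each question refers to, since the tasks require multimodal grounding.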