[2507.07999] Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology
Computer Science > Computer Vision and Pattern Recognition
arXiv:2507.07999 (cs)
[Submitted on 10 Jul 2025 (v1), last revised 5 Mar 2026 (this version, v2)]

Title: Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology
Authors: Haochen Wang, Xiangtai Li, Zilong Huang, Anran Wang, Jiacong Wang, Tao Zhang, Jiani Zheng, Sule Bai, Zijian Kang, Jiashi Feng, Zhuochen Wang, Zhaoxiang Zhang

Abstract: Models like OpenAI-o3 pioneer visual grounded reasoning by dynamically referencing visual regions, much as humans "think with images". However, no benchmark exists to evaluate these capabilities holistically. To bridge this gap, we propose TreeBench (Traceable Evidence Evaluation Benchmark), a diagnostic benchmark built on three principles: (1) focused visual perception of subtle targets in complex scenes, (2) traceable evidence via bounding box evaluation, and (3) second-order reasoning that tests object interactions and spatial hierarchies beyond simple object localization. Prioritizing images with dense objects, we initially sample 1K high-quality images from SA-1B, and incorporate eight LMM experts to manually annotate questi...
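The abstract's second principle, traceable evidence via bounding box evaluation, presumably scores whether a model's cited image region matches an annotated ground-truth box. The paper's exact protocol is not shown in this excerpt; a minimal sketch of the standard intersection-over-union (IoU) check commonly used for such box matching, with a hypothetical threshold of 0.5, might look like:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the overlapping rectangle (empty if boxes are disjoint).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def evidence_is_traceable(pred_box, gt_box, threshold=0.5):
    """Accept a predicted evidence box if its IoU with the annotation is high enough."""
    return iou(pred_box, gt_box) >= threshold
```

For example, a predicted box (0, 0, 10, 10) against a ground-truth box (5, 5, 15, 15) overlaps on a 5x5 square, giving IoU = 25 / 175 ≈ 0.14, which would fail the 0.5 threshold. The threshold value here is an assumption for illustration, not a detail taken from the paper.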