[2511.05271] DeepEyesV2: Toward Agentic Multimodal Model
Computer Science > Computer Vision and Pattern Recognition

arXiv:2511.05271 (cs)
[Submitted on 7 Nov 2025 (v1), last revised 27 Feb 2026 (this version, v3)]

Title: DeepEyesV2: Toward Agentic Multimodal Model
Authors: Jack Hong, Chenxiao Zhao, ChengLin Zhu, Weiheng Lu, Guohai Xu, Xing Yu

Abstract: Agentic multimodal models should not only comprehend text and images, but also actively invoke external tools, such as code execution environments and web search, and integrate these operations into their reasoning. In this work, we introduce DeepEyesV2 and explore how to build an agentic multimodal model from the perspectives of data construction, training methods, and model evaluation. We observe that direct reinforcement learning alone fails to induce robust tool-use behavior. This observation motivates a two-stage training pipeline: a cold-start stage to establish tool-use patterns, and a reinforcement learning stage to further refine tool invocation. We curate a diverse, moderately challenging training dataset, specifically including examples where tool use is beneficial. We further introduce RealX-Bench, a comprehensive benchmark designed to evaluate real-world multimodal reasoning, which inherently requires the integration of multiple capabilities, including perception, search, and reasoning. We evaluate DeepEyesV2 on RealX-Bench and other representa...