[2603.24326] Boosting Document Parsing Efficiency and Performance with Coarse-to-Fine Visual Processing
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.24326 (cs)

[Submitted on 25 Mar 2026 (v1), last revised 3 Apr 2026 (this version, v2)]

Title: Boosting Document Parsing Efficiency and Performance with Coarse-to-Fine Visual Processing

Authors: Cheng Cui, Ting Sun, Suyin Liang, Tingquan Gao, Zelun Zhang, Jiaxuan Liu, Xueqing Wang, Changda Zhou, Hongen Liu, Manhui Lin, Yue Zhang, Yubo Zhang, Jing Zhang, Jun Zhang, Xing Wei, Yi Liu, Dianhai Yu, Yanjun Ma

Abstract: Document parsing is a fine-grained task in which image resolution significantly impacts performance. While advanced research leveraging vision-language models benefits from high-resolution input to boost model performance, this often leads to a quadratic increase in the number of vision tokens and significantly raises computational costs. We attribute this inefficiency to substantial visual region redundancy in document images, such as backgrounds. To tackle this, we propose PaddleOCR-VL, a novel coarse-to-fine architecture that focuses on semantically relevant regions while suppressing redundant ones, thereby improving both efficiency and performance. Specifically, we introduce a lightweight Valid Region Focus Module (VRFM) which leverages localization and contextual relationship prediction capabilities to identify valid ...
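The quadratic cost the abstract refers to, and the savings from attending only to valid regions, can be sketched with a toy token-count calculation. This is an illustrative sketch, not the paper's implementation: the patch size, resolutions, and `valid_fraction` below are assumed values for a generic ViT-style encoder.

```python
# Illustrative sketch (assumed numbers, not from the paper): for a fixed patch
# size, vision-token count grows quadratically with input resolution, which is
# the inefficiency the abstract attributes to redundant regions.

def num_vision_tokens(height, width, patch=14):
    """One token per non-overlapping patch in a ViT-style encoder."""
    return (height // patch) * (width // patch)

def kept_tokens(height, width, valid_fraction, patch=14):
    """Tokens remaining if only a fraction of patches (the semantically
    relevant regions, e.g. text blocks rather than background) is kept."""
    return int(num_vision_tokens(height, width, patch) * valid_fraction)

# Doubling each side roughly quadruples the token count ...
low = num_vision_tokens(448, 448)    # 32 * 32 = 1024 tokens
high = num_vision_tokens(896, 896)   # 64 * 64 = 4096 tokens
assert high == 4 * low

# ... while focusing on, say, 40% valid area keeps the high-resolution
# cost close to the low-resolution budget.
focused = kept_tokens(896, 896, 0.40)  # 1638 tokens
```

This is the intuition behind a coarse-to-fine scheme: a cheap coarse pass locates the valid regions, and only those regions pay the high-resolution token cost.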