[2604.06685] ChemVLR: Prioritizing Reasoning in Perception for Chemical Vision-Language Understanding
Computer Science > Computation and Language
arXiv:2604.06685 (cs)
[Submitted on 8 Apr 2026]

Authors: Xuanle Zhao, Xinyuan Cai, Xiang Cheng, Xiuyi Chen, Bo Xu

Abstract: While Vision-Language Models (VLMs) have demonstrated significant potential in chemical visual understanding, current models are predominantly optimized for direct visual question-answering tasks. This paradigm often results in "black-box" systems that fail to utilize the inherent capability of Large Language Models (LLMs) to infer underlying reaction mechanisms. In this work, we introduce ChemVLR, a chemical VLM designed to prioritize reasoning within the perception process. Unlike conventional chemical VLMs, ChemVLR analyzes visual inputs in a fine-grained manner by explicitly identifying granular chemical descriptors, such as functional groups, prior to generating answers. This approach ensures the production of explicit and interpretable reasoning paths for complex visual chemical problems. To facilitate this methodology, we implement a cross-modality reverse-engineering strategy, combined with a rigorous filtering pipeline, to curate a large-scale reasoning-and-captioning dataset comprising 760k high-quality samples across molecul...
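The perceive-then-reason paradigm the abstract describes can be sketched as a minimal two-stage pipeline: first extract granular chemical descriptors (e.g. functional groups) from the visual input, then condition the answer on those explicit descriptors so the reasoning path is inspectable. Everything below is an illustrative assumption, not the authors' implementation: the function names, the dictionary-based descriptor stub standing in for a real VLM call, and the example molecule are all hypothetical.

```python
# Hypothetical sketch of "reasoning within perception": identify fine-grained
# chemical descriptors first, then answer conditioned on them. The descriptor
# lookup is a stub standing in for an actual vision-language model.

def perceive_descriptors(image_id: str) -> list[str]:
    """Stage 1 (stub): extract fine-grained chemical descriptors from an image.
    A real system would invoke a VLM here; we use a fixed lookup to illustrate."""
    stub = {"aspirin.png": ["carboxylic acid", "ester", "aromatic ring"]}
    return stub.get(image_id, [])

def answer_with_reasoning(image_id: str, question: str) -> dict:
    """Stage 2: build the answer on top of the explicit descriptors, yielding
    an interpretable reasoning path rather than a black-box prediction."""
    descriptors = perceive_descriptors(image_id)
    reasoning = f"Identified descriptors: {', '.join(descriptors)}."
    return {"question": question, "descriptors": descriptors, "reasoning": reasoning}

result = answer_with_reasoning("aspirin.png", "Which functional groups are present?")
print(result["reasoning"])
```

Because the descriptors are surfaced as an explicit intermediate, a failure can be traced to either the perception stage or the reasoning stage, which is the interpretability benefit the abstract claims over direct question-answering.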