[2604.00086] Hierarchical Pre-Training of Vision Encoders with Large Language Models
Computer Science > Computer Vision and Pattern Recognition

arXiv:2604.00086 (cs)

[Submitted on 31 Mar 2026]

Title: Hierarchical Pre-Training of Vision Encoders with Large Language Models
Authors: Eugene Lee, Ting-Yu Chang, Jui-Huang Tsai, Jiajie Diao, Chen-Yi Lee

Abstract: The field of computer vision has experienced significant advancements through scalable vision encoders and multimodal pre-training frameworks. However, existing approaches often treat vision encoders and large language models (LLMs) as independent modules, limiting the integration of hierarchical visual features. In this work, we propose HIVE (Hierarchical Pre-Training of Vision Encoders), a novel framework that enhances vision-language alignment by introducing hierarchical cross-attention between the vision encoder and LLM. Unlike conventional methods that flatten image embeddings, HIVE enables structured feature fusion across multiple layers, improving gradient flow and representation learning. To optimize this interaction, we introduce a three-stage training strategy that progressively aligns the vision encoder with the LLM, ensuring stable optimization and effective multimodal fusion. Empirical evaluations demonstrate that HIVE achieves superior performance not only in image classification but also on various vision-language tasks, outperforming ...
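The abstract names two technical components: layer-wise cross-attention between the vision encoder and the LLM, and a three-stage alignment schedule. Neither is specified in detail on this page, so the PyTorch sketch below is an illustration only. The module names (HierarchicalCrossAttention, HIVEFusion), the learned gate, and the pairing of encoder layers with LLM layers are assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class HierarchicalCrossAttention(nn.Module):
        # Cross-attends LLM hidden states to features from one vision-encoder layer.
        def __init__(self, llm_dim, vis_dim, num_heads=8):
            super().__init__()
            self.proj = nn.Linear(vis_dim, llm_dim)   # map vision features into the LLM's space
            self.attn = nn.MultiheadAttention(llm_dim, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(llm_dim)
            self.gate = nn.Parameter(torch.zeros(1))  # zero-init so fusion starts as identity

        def forward(self, h, v):
            # h: (B, T, llm_dim) LLM hidden states; v: (B, N, vis_dim) vision features
            v = self.proj(v)
            fused, _ = self.attn(query=self.norm(h), key=v, value=v)
            return h + torch.tanh(self.gate) * fused  # gated residual fusion path

    class HIVEFusion(nn.Module):
        # Pairs several vision-encoder layers with several LLM layers, shallow to
        # deep, instead of flattening the image into a single token sequence.
        def __init__(self, llm_dim, vis_dim, num_levels=3):
            super().__init__()
            self.levels = nn.ModuleList(
                HierarchicalCrossAttention(llm_dim, vis_dim) for _ in range(num_levels)
            )

        def forward(self, llm_states, vis_feats):
            # llm_states, vis_feats: per-level lists of tensors, shallow to deep
            return [blk(h, v) for blk, h, v in zip(self.levels, llm_states, vis_feats)]

The zero-initialized gate is one common way to keep optimization stable when new cross-attention paths are grafted onto a pre-trained LLM, which matches the abstract's emphasis on stable optimization; whether HIVE uses gating is not stated. Likewise, the abstract only says training proceeds in three progressive alignment stages. One plausible, purely hypothetical freezing schedule (model.llm, model.vision_encoder, and model.fusion are assumed attribute names):

    def set_stage(model, stage):
        # Hypothetical three-stage split; the paper's actual stages may differ.
        for p in model.fusion.parameters():
            p.requires_grad = True            # new fusion layers train in every stage
        for p in model.vision_encoder.parameters():
            p.requires_grad = stage >= 2      # encoder joins from stage 2
        for p in model.llm.parameters():
            p.requires_grad = stage == 3      # LLM unfrozen only in the final stage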