[2604.00086] Hierarchical Pre-Training of Vision Encoders with Large Language Models

Computer Science > Computer Vision and Pattern Recognition — arXiv:2604.00086 (cs). Submitted on 31 Mar 2026.

Title: Hierarchical Pre-Training of Vision Encoders with Large Language Models

Authors: Eugene Lee, Ting-Yu Chang, Jui-Huang Tsai, Jiajie Diao, Chen-Yi Lee

Abstract: The field of computer vision has experienced significant advancements through scalable vision encoders and multimodal pre-training frameworks. However, existing approaches often treat vision encoders and large language models (LLMs) as independent modules, limiting the integration of hierarchical visual features. In this work, we propose HIVE (Hierarchical Pre-Training of Vision Encoders), a novel framework that enhances vision-language alignment by introducing hierarchical cross-attention between the vision encoder and LLM. Unlike conventional methods that flatten image embeddings, HIVE enables structured feature fusion across multiple layers, improving gradient flow and representation learning. To optimize this interaction, we introduce a three-stage training strategy that progressively aligns the vision encoder with the LLM, ensuring stable optimization and effective multimodal fusion. Empirical evaluations demonstrate that HIVE achieves superior performance not only in image classification but also on various vision-language tasks, outperf...
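The abstract describes LLM tokens attending to vision features drawn from multiple encoder stages rather than a single flattened embedding. The paper's exact architecture is not given here, so the following is only a minimal NumPy sketch of the general idea: single-head scaled dot-product cross-attention applied per level, with residual fusion. All function names, dimensions, and the averaging-free residual scheme are illustrative assumptions, not HIVE's actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross-attention."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (T_q, T_k)
    weights = softmax(scores, axis=-1)       # rows sum to 1
    return weights @ values                  # (T_q, d)

def hierarchical_fusion(llm_states, vision_levels):
    """Fuse multi-level vision features into LLM token states.

    llm_states:    (T, d) token representations from the LLM
    vision_levels: list of (N_i, d) feature maps, one per encoder stage
    (Hypothetical fusion rule: one residual cross-attention per level.)
    """
    fused = llm_states.copy()
    for feats in vision_levels:
        # LLM tokens attend to this level's vision features; residual add
        fused = fused + cross_attention(fused, feats, feats)
    return fused

rng = np.random.default_rng(0)
d = 16
llm = rng.normal(size=(4, d))                             # 4 text tokens
levels = [rng.normal(size=(n, d)) for n in (64, 16, 4)]   # coarse-to-fine stages
out = hierarchical_fusion(llm, levels)
print(out.shape)  # (4, 16)
```

In a real implementation each level would also carry its own learned query/key/value projections and normalization; the sketch omits these to isolate the structural point that every encoder stage, not just the final one, contributes features to the fusion.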

Originally published on April 02, 2026. Curated by AI News.

