[2604.04908] HI-MoE: Hierarchical Instance-Conditioned Mixture-of-Experts for Object Detection


Computer Science > Machine Learning
arXiv:2604.04908 (cs) · Submitted on 6 Apr 2026

Title: HI-MoE: Hierarchical Instance-Conditioned Mixture-of-Experts for Object Detection
Authors: Vadim Vashkelis, Natalia Trukhina

Abstract: Mixture-of-Experts (MoE) architectures enable conditional computation by activating only a subset of model parameters for each input. Although sparse routing has been highly effective in language models and has also shown promise in vision, most vision MoE methods operate at the image or patch level. This granularity is poorly aligned with object detection, where the fundamental unit of reasoning is an object query corresponding to a candidate instance. We propose Hierarchical Instance-Conditioned Mixture-of-Experts (HI-MoE), a DETR-style detection architecture that performs routing in two stages: a lightweight scene router first selects a scene-consistent expert subset, and an instance router then assigns each object query to a small number of experts within that subset. This design aims to preserve sparse computation while better matching the heterogeneous, instance-centric structure of detection. In the current draft, experiments are concentrated on COCO with preliminary specialization analysis on LVIS. Under these settings, HI-MoE improves over a dense DINO baseline and o...
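The abstract only sketches the two-stage routing idea, so the following is a minimal NumPy illustration of hierarchical routing under assumed details: linear experts stand in for FFN experts, the scene router mean-pools the object queries, and the subset sizes (`SCENE_K`, `INSTANCE_K`), dimensions, and gating choices are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes -- the abstract does not specify these.
D, N_EXPERTS, SCENE_K, INSTANCE_K = 16, 8, 4, 2

# Linear maps as stand-ins for per-expert FFNs.
experts = [rng.normal(size=(D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
W_scene = rng.normal(size=(D, N_EXPERTS))   # scene-router weights
W_inst = rng.normal(size=(D, N_EXPERTS))    # instance-router weights

def hi_moe_layer(queries):
    """queries: (num_queries, D) object-query embeddings for one image."""
    # Stage 1: scene router pools all queries and picks one
    # scene-consistent subset of experts for the whole image.
    scene_feat = queries.mean(axis=0)
    scene_logits = scene_feat @ W_scene
    subset = np.argsort(scene_logits)[-SCENE_K:]

    # Stage 2: instance router scores only the scene subset,
    # then each query activates its own top-k within it.
    inst_logits = queries @ W_inst[:, subset]               # (Q, SCENE_K)
    top = np.argsort(inst_logits, axis=1)[:, -INSTANCE_K:]  # per-query picks

    out = np.zeros_like(queries)
    for q in range(queries.shape[0]):
        chosen = subset[top[q]]                 # global expert indices
        gates = softmax(inst_logits[q, top[q]]) # normalize chosen logits
        for g, e in zip(gates, chosen):
            out[q] += g * (queries[q] @ experts[e])
    return out, subset

queries = rng.normal(size=(5, D))
out, subset = hi_moe_layer(queries)
print(out.shape, len(subset))  # (5, 16) 4
```

Note the sparsity pattern this gives: only `SCENE_K` experts are ever touched per image, and each query mixes just `INSTANCE_K` of them, which is the "sparse computation with instance-centric routing" trade-off the abstract describes.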

Originally published on April 07, 2026. Curated by AI News.
