[2603.04247] Online Learning for Multi-Layer Hierarchical Inference under Partial and Policy-Dependent Feedback
Computer Science > Machine Learning
arXiv:2603.04247 (cs)
[Submitted on 4 Mar 2026]

Title: Online Learning for Multi-Layer Hierarchical Inference under Partial and Policy-Dependent Feedback
Authors: Haoran Zhang, Seohyeon Cha, Hasan Burhan Beytur, Kevin S Chan, Gustavo de Veciana, Haris Vikalo

Abstract: Hierarchical inference systems route tasks across multiple computational layers, where each node may either finalize a prediction locally or offload the task to a node in the next layer for further processing. Learning optimal routing policies in such systems is challenging: inference loss is defined recursively across layers, while feedback on prediction error is revealed only at a terminal oracle layer. This induces a partial, policy-dependent feedback structure in which observability probabilities decay with depth, causing importance-weighted estimators to suffer from amplified variance. We study online routing for multi-layer hierarchical inference under long-term resource constraints and terminal-only feedback. We formalize the recursive loss structure and show that naive importance-weighted contextual bandit methods become unstable as feedback probability decays along the hierarchy. To address this, we develop a variance-reduced EXP4-based algorithm integrated with Lyapunov opti...
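The variance amplification the abstract describes can be seen in a minimal simulation (not from the paper; the estimator form and parameter values below are illustrative assumptions). With feedback observed only with probability p, the standard importance-weighted loss estimate loss/p on observed rounds and 0 otherwise stays unbiased, but its variance grows as loss^2 * (1 - p) / p, blowing up as p decays with depth:

```python
import random

def iw_estimate_stats(p, loss=1.0, n=200_000, seed=0):
    """Empirical mean and variance of the importance-weighted estimator
    L_hat = (obs / p) * loss, where obs ~ Bernoulli(p) models whether
    terminal-layer feedback was revealed for this round."""
    rng = random.Random(seed)
    samples = [(loss / p if rng.random() < p else 0.0) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean, var

# The estimator stays unbiased (mean ~ loss) at every p, but its
# variance tracks the theoretical value loss^2 * (1 - p) / p.
for p in (0.5, 0.1, 0.01):
    mean, var = iw_estimate_stats(p)
    print(f"p={p:.2f}  mean~{mean:.3f}  var~{var:.1f}  theory={(1 - p) / p:.1f}")
```

For a 4-layer hierarchy where each layer offloads with probability 0.1, the terminal feedback probability is already 1e-3, so the naive estimator's variance is roughly 1000x the loss scale, matching the instability the authors attribute to depth-decaying observability.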