[2603.24787] ReLope: KL-Regularized LoRA Probes for Multimodal LLM Routing
arXiv:2603.24787 (cs) [Submitted on 25 Mar 2026]
Computer Science > Artificial Intelligence
Title: ReLope: KL-Regularized LoRA Probes for Multimodal LLM Routing
Authors: Yaopei Zeng, Congchao Wang, Blake JianHang Chen, Lu Lin
Abstract: Routing has emerged as a promising strategy for balancing performance and cost in large language model (LLM) systems that combine lightweight models with powerful but expensive large models. Recent studies show that \emph{probe routing}, which predicts the correctness of a small model from its hidden states, provides an effective solution for text-only LLMs. However, we observe that these probes degrade substantially when applied to multimodal LLMs (MLLMs). Through empirical analysis, we find that the presence of visual inputs weakens the separability of correctness signals in hidden states, making them harder to extract with standard probe designs. To address this challenge, we introduce two complementary approaches for improving probe routing in MLLMs. First, we propose the \emph{Attention Probe}, which aggregates hidden states from the preceding layer based on attention scores to recover distributed correctness signals. Second, we present the \emph{KL-Regularized LoRA Probe (ReLope)}, which inserts a lightweight LoRA adapter and applies a KL regularizer to learn routing-aware represent...
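To make the routing setup concrete, below is a minimal NumPy sketch of an attention-style correctness probe in the spirit the abstract describes: a learned query scores each token's hidden state, the scores weight a pooled summary, and a linear head estimates the probability that the small model answers correctly. All names (`AttentionProbe`, `query`, the routing threshold) and the single-query design are illustrative assumptions, not the paper's actual architecture or training procedure.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

class AttentionProbe:
    """Hypothetical sketch of an attention-based probe (names assumed).

    A learned query attends over the token hidden states of one layer,
    pools them into a single vector, and a linear head maps that vector
    to an estimated probability that the small model's answer is correct.
    """
    def __init__(self, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        # In practice these parameters would be trained on correctness labels;
        # here they are random placeholders.
        self.query = rng.standard_normal(hidden_dim) / np.sqrt(hidden_dim)
        self.w = rng.standard_normal(hidden_dim) / np.sqrt(hidden_dim)
        self.b = 0.0

    def __call__(self, hidden_states):
        # hidden_states: (seq_len, hidden_dim) from the probed layer.
        scores = softmax(hidden_states @ self.query)  # attention over tokens
        pooled = scores @ hidden_states               # weighted aggregation
        logit = pooled @ self.w + self.b
        return 1.0 / (1.0 + np.exp(-logit))           # P(small model correct)

# Toy usage: route to the expensive model when predicted correctness is low.
probe = AttentionProbe(hidden_dim=16)
h = np.random.default_rng(1).standard_normal((5, 16))
p = probe(h)
route_to_large = bool(p < 0.5)
```

In a deployed router, `p` would be compared against a threshold chosen on a validation set to trade off accuracy against the cost of invoking the large model.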