[2603.20895] LLM Router: Prefill is All You Need
Computer Science > Computation and Language
arXiv:2603.20895 (cs) [Submitted on 21 Mar 2026]

Title: LLM Router: Prefill is All You Need
Authors: Tanay Varshney, Annie Surla, Michelle Xu, Gomathy Venkata Krishnan, Maximilian Jeblick, David Austin, Neal Vaidya, Davide Onofrio

Abstract: LLMs often share comparable benchmark accuracies, but their complementary performance across task subsets suggests that an Oracle router, a theoretical selector with perfect foresight, can significantly surpass standalone model accuracy by exploiting model-specific strengths. While current routers rely on fragile semantic signals, we propose using internal prefill activations via Encoder-Target Decoupling: a functional separation between the model providing the predictive signal (the Encoder) and the model whose performance is being estimated (the Target). This allows optimized heterogeneous pairing between unique encoders and target models. We use Fisher Separability (J) and Effective Dimensionality (d_eff) as mathematical probes to isolate optimal layer-wise signals, providing the predictive foundation for our SharedTrunkNet architecture. SharedTrunkNet captures up to 45.58% of the accuracy gap between the strongest standalone model and the Oracle while achieving 74.31% cost savings relative to the highest-cost model.

Subjects: Computation and Language (cs.C...
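The two probes named in the abstract have standard textbook definitions that can be sketched directly. The snippet below is an illustrative implementation, not the paper's code: it assumes the common scalar form of the Fisher criterion (between-class over within-class scatter) and the participation-ratio form of effective dimensionality, applied to synthetic stand-ins for layer-wise prefill activations. The paper's exact formulations of J and d_eff may differ.

```python
import numpy as np

def fisher_separability(X, y):
    # Two-class Fisher criterion: ratio of between-class to within-class
    # scatter. Illustrative scalar form; the paper's exact J may differ.
    X0, X1 = X[y == 0], X[y == 1]
    diff = X0.mean(axis=0) - X1.mean(axis=0)
    between = diff @ diff
    within = X0.var(axis=0).sum() + X1.var(axis=0).sum()
    return between / within

def effective_dimensionality(X):
    # Participation ratio over covariance eigenvalues:
    # d_eff = (sum lambda)^2 / sum(lambda^2), with 1 <= d_eff <= n_features.
    lam = np.clip(np.linalg.eigvalsh(np.cov(X, rowvar=False)), 0.0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

# Synthetic stand-in for prefill activations at one layer: two task classes
# (e.g. "Target answers correctly" vs. "Target fails"), hypothetical data.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(200, 16))
X1 = rng.normal(0.0, 1.0, size=(200, 16))
X1[:, 0] += 4.0  # shift one class along a single separating direction
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

J = fisher_separability(X, y)        # higher J => layer is more predictive
d_eff = effective_dimensionality(X)  # how many directions carry the variance
```

Sweeping such probes across a model's layers is one way to pick which layer's activations to feed a router head, which is the role the abstract assigns them.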