[2603.22206] Chimera: Latency- and Performance-Aware Multi-agent Serving for Heterogeneous LLMs
Computer Science > Machine Learning
arXiv:2603.22206 (cs)
[Submitted on 23 Mar 2026]

Title: Chimera: Latency- and Performance-Aware Multi-agent Serving for Heterogeneous LLMs
Authors: Kangqi Ni, Wenyue Hua, Xiaoxiang Shi, Jiang Guo, Shiyu Chang, Tianlong Chen

Abstract: Multi-agent applications often execute complex tasks as multi-stage workflows, where each stage is an LLM call whose output becomes part of the context for subsequent steps. Existing LLM serving systems largely assume homogeneous clusters of identical model replicas. This design overlooks the potential of heterogeneous deployments, where models of different sizes and capabilities enable finer trade-offs between latency and performance. Heterogeneity, however, introduces new scheduling challenges across models with diverse throughput and performance. We present Chimera, a predictive scheduling system for multi-agent workflow serving on heterogeneous LLM clusters that jointly improves end-to-end latency and task performance. Chimera applies semantic routing to estimate per-model confidence scores for each request, predicts the total remaining output length of the workflow, and estimates per-model congestion from the predicted token volumes of in-flight requests for load balancing. We evaluate Chimera on representative agentic workflows for code generation...
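The abstract's routing idea can be illustrated with a minimal sketch. This is not Chimera's implementation: the scoring function, the `alpha` trade-off weight, and the linear congestion estimate are all assumptions made here for illustration. It shows how per-model confidence scores, a predicted remaining output length, and congestion estimated from in-flight predicted token volumes could be combined into one dispatch decision.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    throughput_tps: float   # serving throughput in tokens/sec (assumed known)
    inflight_tokens: int = 0  # predicted tokens of requests already dispatched

def route(models, confidences, predicted_tokens, alpha=0.5):
    """Pick the model with the best confidence/latency trade-off.

    confidences: hypothetical per-model quality scores in [0, 1], as a
        semantic router might produce for this request.
    predicted_tokens: predicted remaining output length of the workflow.
    alpha: assumed scalarization weight on estimated queueing delay.
    """
    best, best_score = None, float("-inf")
    for m in models:
        # Congestion estimate: time to drain in-flight work plus this request,
        # assuming delay grows linearly with predicted token volume.
        delay = (m.inflight_tokens + predicted_tokens) / m.throughput_tps
        score = confidences[m.name] - alpha * delay
        if score > best_score:
            best, best_score = m, score
    # Account for this request's predicted tokens when routing later ones.
    best.inflight_tokens += predicted_tokens
    return best

models = [Model("small-7b", throughput_tps=400.0),
          Model("large-70b", throughput_tps=80.0)]
conf = {"small-7b": 0.6, "large-70b": 0.9}
choice = route(models, conf, predicted_tokens=256)
# The faster small model wins here: its lower confidence is outweighed
# by the large model's much higher estimated delay.
```

The single scalarized score is one plausible way to trade confidence against congestion; the paper's actual objective and predictors may differ.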