[2510.18245] Scaling Laws Meet Model Architecture: Toward Inference-Efficient LLMs
Computer Science > Machine Learning

arXiv:2510.18245 (cs)

[Submitted on 21 Oct 2025 (v1), last revised 1 Mar 2026 (this version, v2)]

Title: Scaling Laws Meet Model Architecture: Toward Inference-Efficient LLMs
Authors: Song Bian, Tao Yu, Shivaram Venkataraman, Youngsuk Park

Abstract: Scaling the number of parameters and the size of training data has proven to be an effective strategy for improving large language model (LLM) performance. Yet, as these models grow increasingly powerful and widely deployed, the cost of inference has become a pressing concern. Despite its importance, the trade-off between model accuracy and inference efficiency remains underexplored. In this work, we examine how key architectural factors, namely hidden size, the allocation of parameters between MLP and attention (the mlp-to-attention ratio), and grouped-query attention (GQA), influence both inference cost and accuracy. We introduce a conditional scaling law that augments the Chinchilla framework with architectural information, along with a search framework for identifying architectures that are simultaneously inference-efficient and accurate. To validate our approach, we train more than 200 models spanning 80M to 3B parameters and 8B to 100B training tokens, and fit the proposed conditional scaling law. Our results show that the conditional scaling law ...
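For context, the Chinchilla framework referenced in the abstract models loss as a function of parameter count N and training-token count D. Below is a minimal sketch of that standard form (Hoffmann et al., 2022), together with one hypothetical way its coefficients could be conditioned on an architecture descriptor a; the paper's actual parameterization is not given on this page, so the conditional variant is an illustration only:

\[
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad
L(N, D \mid a) = E(a) + \frac{A(a)}{N^{\alpha(a)}} + \frac{B(a)}{D^{\beta(a)}}
\]

Here E, A, B, α, and β are constants fitted to observed training losses, and a could encode features such as hidden size, the mlp-to-attention ratio, and the GQA group count.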