[2510.26577] Inference-Cost-Aware Dynamic Tree Construction for Efficient Inference in Large Language Models
Summary
The paper presents CAST, a dynamic tree decoding approach that enhances inference efficiency in large language models by considering inference costs related to GPU configurations and batch sizes.
Why It Matters
As large language models become increasingly prevalent, optimizing their inference speed is crucial for practical applications. This research addresses significant latency issues by introducing a method that adapts to system variables, potentially improving performance across various tasks and setups.
Key Takeaways
- CAST speeds up inference in large language models by up to 5.2 times over conventional decoding methods.
- The approach dynamically adjusts tree structures based on GPU configurations and batch sizes.
- Experimental results show that CAST outperforms existing techniques by 5% to 20% across diverse tasks.
- CAST builds on speculative decoding, generating multiple candidate tokens and validating them in a single pass.
- The code for CAST is publicly available, promoting further research and application.
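To make the speculative-decoding idea behind these results concrete, here is a minimal toy sketch of the draft-then-verify loop. The `draft_model` and `target_model` functions are stand-ins invented for illustration (real systems use a small draft LLM and the full target LLM); this is not CAST's implementation, only the general mechanism it builds on.

```python
import random

def draft_model(prefix, k):
    # Hypothetical cheap drafter: deterministically guess k next tokens.
    random.seed(sum(prefix))
    return [random.randint(0, 9) for _ in range(k)]

def target_model(prefix):
    # Hypothetical expensive target model: the "true" next token.
    return (sum(prefix) * 7 + 3) % 10

def speculative_step(prefix, k=4):
    """One draft-then-verify round: accept the longest prefix of the draft
    that the target model agrees with, then append one corrected token."""
    draft = draft_model(prefix, k)
    accepted = []
    for tok in draft:
        if tok == target_model(prefix + accepted):
            accepted.append(tok)  # draft token matches the target: keep it
        else:
            break  # first mismatch ends the accepted run
    # The target model always contributes one token, so each round makes progress.
    accepted.append(target_model(prefix + accepted))
    return prefix + accepted

seq = [1, 2, 3]
for _ in range(3):
    seq = speculative_step(seq)
print(seq)
```

Each round costs roughly one target-model forward pass but can emit several tokens, which is where the speedup comes from; tree-based methods like CAST verify many draft continuations at once instead of a single chain.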
Computer Science > Computation and Language
arXiv:2510.26577 (cs)
[Submitted on 30 Oct 2025 (v1), last revised 26 Feb 2026 (this version, v2)]
Title: Inference-Cost-Aware Dynamic Tree Construction for Efficient Inference in Large Language Models
Authors: Yinrong Hong, Zhiquan Tan, Kai Hu
Abstract: Large Language Models (LLMs) face significant inference latency challenges stemming from their autoregressive design and large size. To address this, speculative decoding has emerged as a solution, enabling the simultaneous generation and validation of multiple tokens. While recent approaches like EAGLE-2 and EAGLE-3 improve speculative decoding using dynamic tree structures, they often neglect the impact of crucial system variables such as GPU devices and batch sizes. We therefore introduce CAST, a new dynamic tree decoding approach that takes inference costs, including GPU configurations and batch sizes, into account to dynamically refine the tree structure. Through comprehensive experiments across six diverse tasks and six distinct LLMs, our method achieves speeds up to 5.2 times faster than conventional decoding methods, and it generally outperforms existing state-of-the-art techniques by 5% to 20%. The code is publicly available.
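The abstract's core idea, that the best draft-tree size depends on system variables like batch size, can be illustrated with a toy cost model. Everything below is an assumption made for illustration: the linear verification-cost model, the geometric acceptance probability, and the function names are invented here and are not CAST's actual formulation.

```python
def verify_cost(num_nodes, batch_size, per_token=0.05, overhead=0.5):
    # Assumed cost model: one batched verification pass costs a fixed
    # overhead plus per-token work that scales with the batch size.
    return overhead + per_token * num_nodes * batch_size

def expected_accepted(num_nodes, accept_prob=0.7):
    # Assumed acceptance model: draft token at depth d survives with
    # probability accept_prob**d; the target always adds one token.
    return 1.0 + sum(accept_prob ** d for d in range(1, num_nodes + 1))

def best_tree_size(batch_size, max_nodes=64):
    # Pick the draft-tree size maximizing tokens per unit verification cost.
    return max(range(1, max_nodes + 1),
               key=lambda n: expected_accepted(n) / verify_cost(n, batch_size))

for bs in (1, 8, 32):
    print(f"batch_size={bs} -> best tree size {best_tree_size(bs)}")
```

Under this model, larger batches make each extra draft node more expensive to verify, so the optimal tree shrinks as batch size grows, which matches the paper's motivation for conditioning the tree structure on GPU and batch configuration rather than fixing it in advance.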