[2602.23881] LK Losses: Direct Acceptance Rate Optimization for Speculative Decoding
Computer Science > Machine Learning

arXiv:2602.23881 (cs)

[Submitted on 27 Feb 2026]

Title: LK Losses: Direct Acceptance Rate Optimization for Speculative Decoding

Authors: Alexander Samarin, Sergei Krutikov, Anton Shevtsov, Sergei Skvortsov, Filipp Fisin, Alexander Golubev

Abstract: Speculative decoding accelerates autoregressive large language model (LLM) inference by using a lightweight draft model to propose candidate tokens that are then verified in parallel by the target model. The speedup is largely determined by the acceptance rate, yet standard training minimizes Kullback-Leibler (KL) divergence as a proxy objective. While KL divergence and acceptance rate share the same global optimum, small draft models, having limited capacity, typically converge to suboptimal solutions in which minimizing KL does not guarantee maximizing acceptance rate. To address this issue, we propose LK losses, training objectives that directly target acceptance rate. Comprehensive experiments across four draft architectures and six target models, ranging from 8B to 685B parameters, demonstrate consistent improvements in acceptance metrics across all configurations compared to standard KL-based training. We evaluate our approach on general, coding, and math domains and report gains of up to 8-10% in average ...
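As background for the abstract's claim (this is context, not material from the paper): under standard speculative sampling, the probability that a token drafted from the draft distribution q is accepted against the target distribution p is sum_x min(p(x), q(x)), i.e. 1 minus the total variation distance between p and q, whereas the usual distillation proxy is a KL divergence. The toy Python sketch below uses hypothetical 4-token distributions and assumes the forward direction KL(p || q) purely for illustration; it shows how the draft preferred by the KL objective can nonetheless have the lower acceptance rate, which is the gap the abstract describes.

    # Background sketch (not the paper's LK loss): acceptance rate vs. KL proxy.
    # Assumes forward KL(p || q) as the training proxy, for illustration only.
    import numpy as np

    def acceptance_rate(p, q):
        # Expected probability that a token drafted from q is accepted when
        # verified against target p: sum_x min(p(x), q(x)) = 1 - TV(p, q).
        return float(np.minimum(p, q).sum())

    def kl_divergence(p, q):
        # KL(p || q), a common proxy objective for draft-model training.
        return float(np.sum(p * np.log(p / q)))

    target = np.array([0.40, 0.30, 0.20, 0.10])
    # Two hypothetical draft distributions over the same 4-token vocabulary.
    draft_a = np.array([0.30, 0.34, 0.23, 0.13])  # underestimates the top token
    draft_b = np.array([0.42, 0.32, 0.22, 0.04])  # underestimates the rare token

    for name, q in (("draft_a", draft_a), ("draft_b", draft_b)):
        print(f"{name}: KL = {kl_divergence(target, q):.4f}, "
              f"acceptance = {acceptance_rate(target, q):.4f}")
    # draft_a: KL ~ 0.023, acceptance = 0.90
    # draft_b: KL ~ 0.034, acceptance = 0.94
    # The draft favored by the KL objective (draft_a) has the lower acceptance rate.

This only illustrates why a KL proxy and acceptance rate can disagree away from the global optimum; the LK losses themselves are defined in the full paper, not here.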