[2510.15982] AMiD: Knowledge Distillation for LLMs with $α$-mixture Assistant Distribution
Computer Science > Machine Learning
arXiv:2510.15982 (cs)
[Submitted on 13 Oct 2025 (v1), last revised 4 Mar 2026 (this version, v2)]

Title: AMiD: Knowledge Distillation for LLMs with $\alpha$-mixture Assistant Distribution
Authors: Donghyeok Shin, Yeongmin Kim, Suhyeon Jo, Byeonghu Na, Il-Chul Moon

Abstract: Autoregressive large language models (LLMs) have achieved remarkable improvements across many tasks but incur high computational and memory costs. Knowledge distillation (KD) mitigates this issue by transferring knowledge from a large teacher to a smaller student through distributional alignment. Previous studies have proposed various discrepancy metrics, but the capacity gap and the training instability caused by near-zero probabilities, which stem from the high-dimensional output space of LLMs, remain fundamental limitations. To overcome these challenges, several approaches that implicitly or explicitly incorporate an assistant distribution have recently been proposed. However, past proposals of assistant distributions have been fragmented, lacking a systematic investigation of the interpolation path and the divergence. This paper proposes the $\alpha$-mixture assistant distribution, a novel generalized family of assistant distributions, and $\alpha$-mixture distillation, coined AMiD, a unified framework...
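The core idea sketched in the abstract, distilling against an assistant distribution that interpolates between teacher and student so that near-zero student probabilities do not destabilize training, can be illustrated with a minimal PyTorch sketch. The snippet below assumes an arithmetic mixture assistant and a forward KL objective; the function name mixture_distillation_loss, the mixture form, and the divergence choice are illustrative assumptions, not the paper's exact AMiD formulation.

import torch
import torch.nn.functional as F

def mixture_distillation_loss(teacher_logits, student_logits, alpha=0.5):
    # Illustrative sketch (not the paper's exact objective):
    # KL(teacher || assistant) with assistant = alpha * student + (1 - alpha) * teacher.
    # Mixing the student with the teacher keeps the assistant's probabilities away
    # from zero wherever the teacher has mass, avoiding the instability of a plain
    # KL(teacher || student) when the student assigns near-zero probability.
    p_t = F.softmax(teacher_logits, dim=-1)        # teacher distribution over the vocabulary
    p_s = F.softmax(student_logits, dim=-1)        # student distribution over the vocabulary
    p_a = alpha * p_s + (1.0 - alpha) * p_t        # assistant (arithmetic mixture)
    # Forward KL from teacher to the assistant; gradients flow only through p_s.
    return torch.sum(p_t * (torch.log(p_t + 1e-12) - torch.log(p_a + 1e-12)), dim=-1).mean()

# Usage: per-token logits over a (hypothetical) 32k-token vocabulary.
teacher_logits = torch.randn(4, 32000)
student_logits = torch.randn(4, 32000, requires_grad=True)
loss = mixture_distillation_loss(teacher_logits, student_logits, alpha=0.3)
loss.backward()

Setting alpha toward 0 recovers pure teacher supervision in the assistant, while alpha toward 1 pulls the assistant toward the student; the paper's $\alpha$-mixture family generalizes the choice of interpolation path beyond this simple arithmetic instance.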