[2603.01875] KDFlow: A User-Friendly and Efficient Knowledge Distillation Framework for Large Language Models
Computer Science > Computation and Language
arXiv:2603.01875 (cs)
[Submitted on 2 Mar 2026]

Title: KDFlow: A User-Friendly and Efficient Knowledge Distillation Framework for Large Language Models
Authors: Songming Zhang, Xue Zhang, Tong Zhang, Bojie Hu, Yufeng Chen, Jinan Xu

Abstract: Knowledge distillation (KD) is an essential technique for compressing large language models (LLMs) into smaller ones. However, despite the distinct roles of the student and teacher models in KD, most existing frameworks still use a homogeneous training backend (e.g., FSDP or DeepSpeed) for both models, leading to suboptimal training efficiency. In this paper, we present a novel framework for LLM distillation, termed KDFlow, which features a decoupled architecture and employs SGLang for teacher inference. By bridging the training efficiency of FSDP2 and the inference efficiency of SGLang, KDFlow combines the advantages of both in a unified system. Moreover, instead of transferring full logits across processes, our framework transmits only the teacher's hidden states using zero-copy data transfer and recomputes the logits on the student side, effectively balancing communication cost and KD performance. Furthermore, our framework supports both off-policy and on-pol...
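The communication trick described in the abstract can be sketched in a few lines. The idea: for an LLM, the vocabulary size V is much larger than the hidden dimension d, so sending the teacher's final hidden states (seq_len × d) instead of its full logits (seq_len × V) shrinks the payload by a factor of roughly d/V, at the cost of one extra matrix multiply against the teacher's lm_head on the student side. The sketch below is purely illustrative NumPy with toy shapes and randomly initialized weights; none of the names reflect KDFlow's actual API, and the zero-copy transport itself is not modeled.

```python
# Illustrative sketch of hidden-state transfer + logit recomputation for KD.
# Toy shapes and random weights; not KDFlow's actual implementation or API.
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab, seq_len = 64, 1000, 8  # toy sizes; real LLMs have vocab >> d_model

# Payload the teacher process would ship (zero-copy in the real system):
teacher_hidden = rng.standard_normal((seq_len, d_model))
# Teacher's output-embedding (lm_head) weights, assumed shared with the
# student process once at startup rather than per-batch:
lm_head = rng.standard_normal((vocab, d_model))


def log_softmax(x: np.ndarray) -> np.ndarray:
    """Numerically stable log-softmax over the last axis."""
    z = x - x.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))


# Student side: recompute teacher logits from the transmitted hidden states,
# then form a forward-KL distillation loss against the student's own logits.
teacher_logits = teacher_hidden @ lm_head.T            # (seq_len, vocab)
student_logits = rng.standard_normal((seq_len, vocab))  # stand-in for a forward pass

p = np.exp(log_softmax(teacher_logits))                 # teacher distribution
kd_loss = (p * (log_softmax(teacher_logits) - log_softmax(student_logits))).sum(-1).mean()

# Communication saving: hidden states are d_model/vocab the size of full logits.
payload_ratio = d_model / vocab
print(f"KD loss: {kd_loss:.3f}, payload ratio: {payload_ratio:.3f}")
```

For realistic shapes (e.g., d = 4096, V = 128k) the ratio is around 3%, which is why the abstract frames this as a balance between communication cost and KD fidelity: the student must hold a copy of the teacher's lm_head and pay for the extra projection, but the cross-process traffic drops by more than an order of magnitude.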