[2509.25837] Distillation of Large Language Models via Concrete Score Matching
Computer Science > Machine Learning
arXiv:2509.25837 (cs)
[Submitted on 30 Sep 2025 (v1), last revised 1 Mar 2026 (this version, v2)]

Title: Distillation of Large Language Models via Concrete Score Matching
Authors: Yeongmin Kim, Donghyeok Shin, Mina Kang, Byeonghu Na, Il-Chul Moon

Abstract: Large language models (LLMs) deliver remarkable performance but are costly to deploy, motivating knowledge distillation (KD) for efficient inference. Existing KD objectives typically match student and teacher probabilities via softmax, which blurs valuable logit information. While direct logit distillation (DLD) mitigates softmax smoothing, it fails to account for logit shift invariance, thereby restricting the solution space. We propose Concrete Score Distillation (CSD), a discrete score-matching objective that overcomes both softmax-induced smoothing and restrictions on the optimal solution set. We resolve the training instability and quadratic complexity of discrete score matching in autoregressive LLMs, and the resulting CSD objective aligns relative logit differences across all vocabulary pairs between student and teacher with flexible weighting. We provide both mode-seeking and mode-covering instances within our framework and evaluate CSD on task-agnostic instruction-following and task-specific distillation using GPT-2-1.5...
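The abstract's core idea, aligning relative logit differences across all vocabulary pairs, can be illustrated with a minimal NumPy sketch. This is a hypothetical toy version, not the authors' implementation: it uses the naive quadratic-in-vocabulary form with a uniform weighting (the `weights` argument and the `pairwise_logit_diff_loss` name are assumptions for illustration), and omits the stabilization tricks the paper describes for autoregressive LLMs.

```python
import numpy as np

def pairwise_logit_diff_loss(student_logits, teacher_logits, weights=None):
    """Toy pairwise logit-difference objective (illustrative sketch only).

    Penalizes the squared mismatch between student and teacher relative
    logits, (s_i - s_j) vs. (t_i - t_j), over all vocabulary pairs (i, j).
    Because only differences enter, the loss is invariant to adding a
    constant to either logit vector, unlike direct logit matching.
    """
    s = np.asarray(student_logits, dtype=float)
    t = np.asarray(teacher_logits, dtype=float)
    # Pairwise difference matrices via broadcasting: D[i, j] = x[i] - x[j]
    ds = s[:, None] - s[None, :]
    dt = t[:, None] - t[None, :]
    if weights is None:
        weights = np.ones_like(ds)  # uniform weighting over all pairs
    # Mean weighted squared mismatch over the V x V pairs
    return float(np.sum(weights * (ds - dt) ** 2) / s.size ** 2)
```

A quick sanity check of the shift-invariance property: `pairwise_logit_diff_loss(s + c, t)` equals `pairwise_logit_diff_loss(s, t)` for any constant `c`, so the student is free to choose any representative of the shift-equivalence class, which is exactly the enlarged solution set the abstract contrasts with DLD.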