[2602.20816] Don't Ignore the Tail: Decoupling top-K Probabilities for Efficient Language Model Distillation
Summary
The paper presents a novel approach to language model distillation by introducing a tail-aware divergence that enhances the influence of lower-probability predictions, improving efficiency and performance in model training.
Why It Matters
This research addresses a critical limitation in traditional language model distillation methods, which often overlook valuable information from less probable outputs. By focusing on the tail of the probability distribution, this approach can lead to more robust language models, making it particularly relevant for researchers and practitioners in NLP and machine learning.
Key Takeaways
- Introduces a tail-aware divergence for language model distillation.
- Enhances the contribution of lower-probability predictions.
- Maintains computational efficiency similar to traditional KL divergence.
- Demonstrates competitive performance across various datasets.
- Can be implemented with modest computational resources.
Computer Science > Computation and Language
arXiv:2602.20816 (cs)
[Submitted on 24 Feb 2026]
Title: Don't Ignore the Tail: Decoupling top-K Probabilities for Efficient Language Model Distillation
Authors: Sayantan Dasgupta, Trevor Cohn, Timothy Baldwin
Abstract: The core learning signal used in language model distillation is the standard Kullback-Leibler (KL) divergence between the student and teacher distributions. Traditional KL divergence tends to be dominated by the next tokens with the highest probabilities, i.e., the teacher's modes, thereby diminishing the influence of less probable yet potentially informative components of the output distribution. We propose a new tail-aware divergence that decouples the contribution of the teacher model's top-K predicted probabilities from that of lower-probability predictions, while maintaining the same computational profile as the KL divergence. Our decoupled approach reduces the impact of the teacher modes and, consequently, increases the contribution of the tail of the distribution. Experimental results demonstrate that our modified distillation method yields competitive performance in both pre-training and supervised distillation of decoder models across various datasets. Furthermore, the distillation process is efficient and can be p...
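To make the head/tail decoupling concrete, here is a minimal sketch of one plausible reading of the idea: split the vocabulary into the teacher's top-K "head" and the remaining "tail", renormalize each part, and combine per-part KL terms with a mixing weight. The function name `topk_decoupled_kl`, the weight `alpha`, and the exact renormalized-split form are illustrative assumptions, not the paper's actual formulation.

```python
import math

def topk_decoupled_kl(teacher, student, k=2, alpha=0.5):
    """Hypothetical tail-aware divergence (illustrative, not the paper's loss).

    Splits the vocabulary into the teacher's top-k head and the remaining
    tail, renormalizes teacher and student within each part, and returns a
    weighted sum of the two per-part KL terms. Weighting the tail part
    separately keeps it from being drowned out by the teacher's modes.
    """
    # Indices of the teacher's k most probable tokens (the "head").
    order = sorted(range(len(teacher)), key=lambda i: -teacher[i])
    head, tail = order[:k], order[k:]

    def part_kl(idx):
        # KL between teacher and student restricted to idx, each renormalized
        # so the restricted probabilities sum to 1.
        pt = sum(teacher[i] for i in idx)
        ps = sum(student[i] for i in idx)
        return sum(
            (teacher[i] / pt) * math.log((teacher[i] / pt) / (student[i] / ps))
            for i in idx
        )

    # Same computational profile as plain KL: a single pass over the vocabulary.
    return alpha * part_kl(head) + (1 - alpha) * part_kl(tail)

# Toy 5-token vocabulary: the teacher's mass is concentrated on two modes.
teacher = [0.60, 0.20, 0.10, 0.07, 0.03]
student = [0.50, 0.25, 0.12, 0.08, 0.05]
print(topk_decoupled_kl(teacher, student, k=2, alpha=0.5))
```

With `alpha` below the head's probability mass, mismatches on tail tokens contribute relatively more to the loss than they would under standard KL, which is the qualitative effect the abstract describes.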