[2512.14954] Cross-Tokenizer Likelihood Scoring Algorithms for Language Model Distillation
Computer Science > Computation and Language
arXiv:2512.14954 (cs)
[Submitted on 16 Dec 2025 (v1), last revised 6 May 2026 (this version, v2)]

Title: Cross-Tokenizer Likelihood Scoring Algorithms for Language Model Distillation
Authors: Buu Phan, Ashish Khisti, Karen Ullrich

Abstract: Computing next-token likelihood ratios between two language models (LMs) is a standard task in training paradigms such as knowledge distillation. Since this requires both models to share the same probability space, it becomes challenging when the teacher and student LMs use different tokenizers, for instance, when edge-device deployment necessitates a smaller vocabulary size to lower memory overhead. This work addresses the vocabulary misalignment problem by uncovering an implicit recursive structure in the commonly deployed Byte-Pair Encoding (BPE) algorithm and using it to build a probabilistic framework for cross-tokenizer likelihood scoring. Our method enables sequence likelihood evaluation for vocabularies different from the teacher model's native tokenizer, addressing two specific scenarios: when the student vocabulary is a subset of the teacher vocabulary, and the general case where it is arbitrary. In the subset regime, our framework computes exact likelihoods and provides next-token probabilities for sequential samp...
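To make the core idea of cross-tokenizer likelihood scoring concrete, the sketch below shows the kind of marginalization involved: the likelihood of a character string under a restricted vocabulary is the sum, over all ways of segmenting the string into vocabulary tokens, of the product of token probabilities. This toy uses a unigram token model for simplicity (the paper works with autoregressive teacher LMs and the recursive structure of BPE, which this sketch does not capture); `vocab_probs` is a hypothetical name, not from the paper.

```python
def sequence_likelihood(text: str, vocab_probs: dict[str, float]) -> float:
    """Likelihood of `text`, marginalized over all segmentations into
    vocabulary tokens, under a toy unigram model over the vocabulary.

    dp[i] accumulates the total probability of all tokenizations of the
    prefix text[:i]; each step extends a shorter prefix by one token.
    """
    n = len(text)
    dp = [0.0] * (n + 1)
    dp[0] = 1.0  # the empty prefix has exactly one (empty) tokenization
    for i in range(1, n + 1):
        for j in range(i):
            token = text[j:i]
            if token in vocab_probs:
                dp[i] += dp[j] * vocab_probs[token]
    return dp[n]


# "ab" can be tokenized as ["ab"] or ["a", "b"]:
# 0.5 + 0.2 * 0.3 = 0.56
probs = {"ab": 0.5, "a": 0.2, "b": 0.3}
print(sequence_likelihood("ab", probs))  # → 0.56
```

Restricting `vocab_probs` to a subset of the teacher vocabulary illustrates the subset regime: the same dynamic program still sums over every valid tokenization, so no probability mass is silently dropped when tokens are removed.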