[2510.08102] Lossless Vocabulary Reduction for Auto-Regressive Language Models
Summary
This paper introduces a theoretical framework for lossless vocabulary reduction in auto-regressive language models, enabling efficient cooperation between models with different tokenizations without sacrificing accuracy.
Why It Matters
The ability to reduce vocabulary size while maintaining accuracy is crucial for enhancing the efficiency of language models. This framework allows for better interoperability among models, which is significant for applications in natural language processing and machine learning, particularly in ensemble methods.
Key Takeaways
- Establishes a framework for lossless vocabulary reduction.
- Enables auto-regressive models to cooperate despite different tokenizations.
- Demonstrates empirical applicability in model ensemble scenarios.
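To make the core idea concrete, here is a toy sketch of one step of vocabulary reduction: a next-token distribution over multi-character tokens is marginalized onto a smaller, character-level vocabulary by summing the probability mass of all tokens that begin with the same character. The token names and probabilities below are invented for illustration, and this shows only the first generation step; the paper's full construction also accounts for subsequent steps so that generation remains lossless.

```python
def reduce_first_step(next_token_probs):
    """Map a distribution over multi-character tokens to a distribution
    over their first characters by summing probability mass."""
    reduced = {}
    for token, p in next_token_probs.items():
        first = token[0]
        reduced[first] = reduced.get(first, 0.0) + p
    return reduced

# Hypothetical next-token distribution over a 4-token vocabulary.
probs = {"the": 0.5, "thus": 0.2, "a": 0.2, "an": 0.1}

char_probs = reduce_first_step(probs)
# char_probs assigns 't' the mass of "the" + "thus" and 'a' the mass
# of "a" + "an"; total probability is preserved.
```

Because the map only regroups probability mass, the reduced distribution still sums to one, which is the invariant that makes the reduction lossless in aggregate.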
Computer Science > Computation and Language
arXiv:2510.08102 (cs) [Submitted on 9 Oct 2025 (v1), last revised 18 Feb 2026 (this version, v2)]
Title: Lossless Vocabulary Reduction for Auto-Regressive Language Models
Authors: Daiki Chijiwa, Taku Hasegawa, Kyosuke Nishida, Shin'ya Yamaguchi, Tomoya Ohba, Tamao Sakao, Susumu Takeuchi
Abstract: Tokenization -- the process of decomposing a given text into a sequence of subwords called tokens -- is one of the key components in the development of language models. In particular, auto-regressive language models generate text token by token, i.e., by predicting the next-token distribution given the previous ones, so tokenization directly affects their efficiency in text generation. Since each language model has its own vocabulary as a set of possible tokens, models struggle to cooperate with each other at the level of next-token distributions, e.g., in model ensembles. In this paper, we establish a theoretical framework of lossless vocabulary reduction, which efficiently converts a given auto-regressive language model into one with an arbitrarily small vocabulary without any loss in accuracy. This framework allows language models with different tokenizations to cooperate with each other efficiently by reduction to their maximal common vocabulary. Specifically, we empirically demonstrate it...
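The ensemble application described in the abstract can be sketched as follows: two models with incompatible vocabularies are each reduced to a shared character-level vocabulary, after which their next-token distributions can be averaged directly. The vocabularies, probabilities, and uniform 0.5/0.5 weighting below are assumptions for illustration, not the paper's exact construction.

```python
def to_char_level(token_probs):
    """Reduce a token-level distribution to its first-character marginal."""
    out = {}
    for tok, p in token_probs.items():
        out[tok[0]] = out.get(tok[0], 0.0) + p
    return out

# Two hypothetical models with different token vocabularies.
model_a = {"hello": 0.6, "hi": 0.3, "yes": 0.1}
model_b = {"hey": 0.5, "yo": 0.5}

# Reduce both to the common character-level vocabulary, then average.
a, b = to_char_level(model_a), to_char_level(model_b)
chars = set(a) | set(b)
ensemble = {c: 0.5 * a.get(c, 0.0) + 0.5 * b.get(c, 0.0) for c in chars}
```

The design point is that averaging only becomes well-defined once both distributions live over the same vocabulary; the reduction supplies that common support.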