[2602.13940] You Can Learn Tokenization End-to-End with Reinforcement Learning
Summary
This paper learns token boundaries end-to-end using reinforcement-learning-style score function estimates, showing improved performance over straight-through baselines at the 100-million-parameter scale.
Why It Matters
Tokenization remains a hardcoded preprocessing step in LLM training pipelines even as the rest of the architecture has become increasingly end-to-end. Learning token boundaries directly from the loss removes this last fixed compression stage and moves LLMs closer to fully end-to-end training.
Key Takeaways
- Introduces a new method for tokenization using reinforcement learning.
- Demonstrates improved performance over existing tokenization techniques.
- Optimizes discrete token boundaries directly with score function estimates, which carry tighter theoretical guarantees than straight-through relaxations.
- Emphasizes the role of time discounting in reducing variance.
- Provides theoretical guarantees for the proposed approach.
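The two estimator-side takeaways above can be sketched concretely. Below is a minimal, illustrative NumPy implementation (not the paper's code; all names such as `score_function_grad`, `gamma`, and the per-position loss signal are hypothetical) of a score-function (REINFORCE-style) gradient for discrete boundary decisions, with time discounting limiting how far into the future each decision is credited, which is what reduces the estimator's variance:

```python
import numpy as np

def discounted_returns(losses, gamma=0.9):
    """Credit each position with its own loss plus discounted future losses."""
    T = len(losses)
    returns = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        running = losses[t] + gamma * running
        returns[t] = running
    return returns

def score_function_grad(logits, losses, gamma=0.9, rng=None):
    """Score-function gradient estimate w.r.t. boundary logits.

    logits: (T,) logits for drawing a token boundary after each position.
    losses: (T,) per-position LM losses, used as the cost signal.
    Returns one sampled gradient estimate, shape (T,).
    """
    rng = rng or np.random.default_rng(0)
    probs = 1.0 / (1.0 + np.exp(-logits))            # Bernoulli boundary probs
    decisions = rng.random(len(logits)) < probs      # discrete, non-differentiable sample
    # d/dlogit of log Bernoulli(decision; sigmoid(logit)) = decision - prob
    grad_log_prob = decisions.astype(float) - probs
    # Discounted credit assignment: with gamma < 1, each boundary decision
    # is only blamed for near-future losses, shrinking the variance.
    returns = discounted_returns(losses, gamma)
    # REINFORCE: E[grad] = E[return_t * grad log pi_t]
    return grad_log_prob * returns
```

With `gamma=1` every decision would be credited with the entire future loss, which is unbiased but extremely high-variance; the paper's observation is that such RL techniques are what make the score-function approach practicable.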
Computer Science > Machine Learning
arXiv:2602.13940 (cs)
[Submitted on 15 Feb 2026]
Title: You Can Learn Tokenization End-to-End with Reinforcement Learning
Authors: Sam Dauncey, Roger Wattenhofer
Abstract: Tokenization is a hardcoded compression step which remains in the training pipeline of Large Language Models (LLMs), despite a general trend towards architectures becoming increasingly end-to-end. Prior work has shown promising results at scale in bringing this compression step inside the LLMs' architecture with heuristics to draw token boundaries, and also attempts to learn these token boundaries with straight-through estimates, which treat the problem of drawing discrete token boundaries as a continuous one. We show that these token boundaries can instead be learned using score function estimates, which have tighter theoretical guarantees due to directly optimizing the problem of drawing discrete token boundaries to minimize loss. We observe that techniques from reinforcement learning, such as time discounting, are necessary to reduce the variance of this score function sufficiently to make it practicable. We demonstrate that the resultant method outperforms prior proposed straight-through estimates, both qualitatively and quantitatively at the $100$ million parameter scale.
Subjects: Machine Learning (cs.LG); ...
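For contrast with the abstract's baseline, here is a minimal sketch (hypothetical names, not the paper's code) of the straight-through estimate it compares against: the forward pass uses hard 0/1 boundary decisions, while the backward pass pretends the decision was the continuous sigmoid, which makes the gradient biased:

```python
import numpy as np

def straight_through_boundary(logits, dloss_ddecision):
    """Straight-through sketch for discrete boundary decisions.

    logits: (T,) boundary logits.
    dloss_ddecision: (T,) upstream gradient of the loss w.r.t. each decision.
    Returns (hard_decisions, approx_grad_wrt_logits).
    """
    probs = 1.0 / (1.0 + np.exp(-logits))
    hard = (probs > 0.5).astype(float)          # forward: discrete boundaries
    # Backward: substitute the sigmoid's derivative for the (zero almost
    # everywhere) derivative of the hard threshold -- the source of the bias.
    approx_grad = dloss_ddecision * probs * (1.0 - probs)
    return hard, approx_grad
```

The score-function approach avoids this mismatch by differentiating the sampling distribution itself rather than relaxing the discrete decision, at the cost of higher variance, which the paper addresses with RL techniques such as time discounting.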